We were lacking tests checking the completeness of our nodedev XMLs
and whether we output properly formatted ones. This patch adds
parsing for the capability elements inside the <capability
type='pci'> element. A bunch of tests are also added to show that
everything works properly.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
A few things were done in the nodedev code, but we were lacking
tests for them. Because of that we missed that the schema was not
updated either. Fix the schema and add various test files to show
the schema is correct.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
We had both and the only difference was that the latter also
included information about the multifunction setting. The problem
was that we couldn't use functions written for only one of the
structs (e.g. parsing). To consolidate the two structs, use the one
in virpci.h, include that in domain_conf.h, and add the
multifunction member to it.
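For illustration, a minimal sketch of the consolidated layout; the
field names mirror virpci.h, but the exact type used for the
multifunction tristate in libvirt may differ:

    /* Sketch only: field names mirror the consolidated struct in virpci.h;
     * the exact type of the multifunction member in libvirt may differ. */
    typedef struct _virPCIDeviceAddress virPCIDeviceAddress;
    struct _virPCIDeviceAddress {
        unsigned int domain;
        unsigned int bus;
        unsigned int slot;
        unsigned int function;
        int multi;          /* tristate: absent / on / off (multifunction) */
    };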
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Rather than taking username and password as parameters, take a
qemuDomainSecretInfoPtr and decode it within the function.
NB: Having secinfo implies having the username for a plain type
from a successful virSecretGetSecretString
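For illustration only, a hedged sketch of the shape of this change;
the struct and function below are simplified stand-ins, not the real
libvirt signatures:

    #include <stdio.h>

    /* Simplified stand-ins; the real qemuDomainSecretInfo layout differs. */
    typedef struct {
        char *username;
        char *secret;        /* value fetched via virSecretGetSecretString */
    } DemoSecretInfo;

    /* Before: callers decoded the secret and passed username/password
     * strings separately.  After: the whole secinfo is handed over and
     * decoded inside the builder; a NULL secinfo means "no auth". */
    static int
    demoBuildAuthString(char *buf, size_t buflen, const DemoSecretInfo *secinfo)
    {
        if (!secinfo)
            return 0;
        /* having secinfo implies the username is present for the plain type */
        return snprintf(buf, buflen, "id=%s:auth_supported=cephx",
                        secinfo->username);
    }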
Signed-off-by: John Ferlan <jferlan@redhat.com>
Similar to qemuDomainSecretDiskPrepare, generate the secret for the
hostdevs prior to calling qemuProcessLaunch, which calls
qemuBuildCommandLine. Additionally, since the secret is no longer
added as part of building the command line, the hotplug code will
need to make the call to add the secret in the hostdevPriv.
Since this is the last remaining reason to pass a virConnectPtr to
qemuBuildCommandLine, we can now remove that parameter as part of
these changes. That removal has cascading effects through various
callers.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Modeled after the qemuDomainDiskPrivatePtr logic, create a
privateData pointer in _virDomainHostdevDef to allow a hypervisor to
store private data, in this case to at least temporarily hold
auth/secret data for use during qemuBuildCommandLine.
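Roughly, the shape of the addition (a sketch only; the real
_virDomainHostdevDef has many more members and the private data is a
refcounted virObject):

    /* Sketch only: the real _virDomainHostdevDef carries many more members. */
    typedef struct _virObject virObject;   /* opaque refcounted base type */

    struct demoDomainHostdevDef {
        /* ... existing hostdev definition members ... */
        virObject *privateData;   /* hypervisor-private data, e.g. auth/secret
                                   * info held until qemuBuildCommandLine runs */
    };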
NB: Since the qemu_parse_command (qemuParseCommandLine) code is not
expecting to restore the auth/secret data, there's no need to add
code to handle this new structure there.
Updated copyrights for the modules touched; some hadn't been updated
in a couple of years even though changes had been made.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Rather than needing to pass the conn parameter to various command
line building APIs, add qemuDomainSecretPrepare just prior to
qemuProcessLaunch, which calls qemuBuildCommandLine. The function
must be called after qemuProcessPrepareHost since it's expected to
eventually need the domain masterKey generated during the
prepare-host call. Additionally, future patches may require device
aliases (assigned during the prepare-domain call) in order to
associate the secret objects.
qemuDomainSecretDestroy is called after qemuProcessLaunch finishes
in order to clear and free the memory used by the recently prepared
secrets, so they are not kept around in memory longer than
necessary.
Placing the setup here is beneficial for future patches which will
need the domain masterKey in order to generate an encrypted secret
along with an initialization vector to be saved and passed (since
the masterKey shouldn't be passed around).
Finally, since the secret is not added during command line build,
the hotplug code will need to get the secret into the private disk data.
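As an illustration of the intended ordering, a stub sketch (the
*Stub names are placeholders; the real functions take the driver,
the domain object and more):

    /* Stub sketch of the call ordering around VM start; placeholders only. */
    static int  qemuProcessPrepareHostStub(void)   { return 0; } /* masterKey generated */
    static int  qemuDomainSecretPrepareStub(void)  { return 0; } /* fetch/decode secrets */
    static int  qemuProcessLaunchStub(void)        { return 0; } /* builds command line  */
    static void qemuDomainSecretDestroyStub(void)  { }           /* scrub secret memory  */

    static int
    demoProcessStart(void)
    {
        int rc;

        if (qemuProcessPrepareHostStub() < 0)    /* must run first: masterKey   */
            return -1;
        if (qemuDomainSecretPrepareStub() < 0)   /* secrets ready before launch */
            return -1;
        rc = qemuProcessLaunchStub();            /* no virConnectPtr needed now */
        qemuDomainSecretDestroyStub();           /* don't keep secrets around   */
        return rc;
    }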
Signed-off-by: John Ferlan <jferlan@redhat.com>
Introduce a new private structure to hold qemu domain auth/secret
data. It will be stored in the qemuDomainDiskPrivate so that the
auth and fetched secret data are kept there rather than being
generated while building the command line.
The initial changes handle the current username and secret values
for rbd and iscsi disks (in their various forms). The rbd secret is
stored as a base64-encoded value, while the iscsi secret is stored
as plain text. Future changes will store encoded/encrypted secret
data as well as an initialization vector that, together with the
domain masterKey, qemu needs in order to decrypt the encoded
password.
The initial assumption is that VIR_DOMAIN_SECRET_INFO_PLAIN is being
used.
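A rough picture of what such a structure holds for the plain case (a
sketch; names and layout are illustrative, not the actual libvirt
definition):

    /* Sketch of the private secret info for the VIR_DOMAIN_SECRET_INFO_PLAIN
     * case; the real structure and enum in libvirt differ. */
    typedef enum {
        DEMO_SECRET_INFO_PLAIN,   /* the initial assumption */
        /* future: an encrypted variant carrying data + initialization vector */
    } DemoDiskSecretInfoType;

    typedef struct {
        DemoDiskSecretInfoType type;
        char *username;           /* auth username for rbd/iscsi */
        char *secret;             /* base64-encoded for rbd, plain text for iscsi */
    } DemoDiskSecretInfo;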
Although the cleanup of the secret data is expected to be done
immediately after command line generation, reintroduce the object
dispose function qemuDomainDiskPrivateDispose to handle freeing the
memory associated with the structure on "normal" cleanup paths.
Signed-off-by: John Ferlan <jferlan@redhat.com>
After killing one of the conditionals, @drivealias is now guaranteed
to be populated when calling the monitor, so the cleanup code can be
simplified.
For strange reasons, if a perf event type was not supported or
failed to be enabled at VM start, libvirt would ignore the failure.
On the other hand, on restart, if the event could not be re-enabled,
libvirt would fail to reconnect to the VM and kill it.
Neither behavior really makes sense. Fix it by failing to start the
VM if the event is not supported, and by changing the event to
disabled if it can't be re-enabled on reconnect (unlikely).
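The intended behaviour in rough terms, as a sketch with stand-in
names rather than the actual libvirt code:

    #include <stdbool.h>

    /* Sketch only: stand-in names illustrating the two code paths. */
    static int
    demoPerfEventEnable(bool enableSucceeded, bool reconnecting, bool *eventOn)
    {
        if (!enableSucceeded) {
            if (!reconnecting)
                return -1;      /* VM start: report the error and refuse to start */
            *eventOn = false;   /* reconnect: flip the event to disabled instead   */
            return 0;
        }
        *eventOn = true;
        return 0;
    }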
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1329045
Neither disk->src->shared nor disk->src->readonly can be modified
when changing the disk source for floppy and cdrom drives, since in
qemu both are properties of the drive rather than of the image.
Historically these fields can only take two values in the XML
representation, so an omitted attribute ends up being treated as
false; we therefore need to ignore the values when the user did not
provide them.
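The idea, as a hedged sketch (stand-in names, not the actual qemu
driver code): carry the drive-level flags over from the old source
instead of honouring a possibly implicit 'false' in the new
definition.

    #include <stdbool.h>

    /* Stand-in types; the real code works on virStorageSource. */
    struct DemoStorageSource {
        bool readonly;
        bool shared;
        /* ... path, format, ... */
    };

    static void
    demoChangeMediaPreserveFlags(const struct DemoStorageSource *oldsrc,
                                 struct DemoStorageSource *newsrc)
    {
        /* qemu treats these as properties of the drive, not of the image,
         * and an omitted XML attribute is indistinguishable from 'false' */
        newsrc->readonly = oldsrc->readonly;
        newsrc->shared = oldsrc->shared;
    }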
If qemu doesn't support the DEVICE_TRAY_MOVED event, the code that
changes media would attempt to re-eject the tray even though it
would never be notified when the tray opened. Add a capability bit
and skip the retry for old qemus.
Empty floppy drives start with the tray in the "open" state, but
libvirt did not refresh that state after startup. The code that
inserts media then waited for the tray to open before inserting the
media, and thus floppies could not be inserted.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1326660
These are wrappers over virStreamRecv and virStreamSend so that
users have to care about nothing but writing data into / reading
data from a sink (typically a file). Note that these wrappers are
used exclusively on the client side, as the daemon takes a slightly
different approach. The wrappers allocate a buffer and use it for
intermediate data storage until the data is passed on to the stream
to send, or to the client application. So far we are using a 64KB
buffer. This is enough, but suboptimal, because the server can send
messages up to VIR_NET_MESSAGE_LEGACY_PAYLOAD_MAX bytes big
(262120B, roughly 256KB). So if we make the buffer this big, a
single message containing the data is sent instead of four, which is
the current situation. This means lower overhead: each message
contains a header which needs to be processed, each message takes
roughly the same amount of time to process regardless of its size,
fewer bytes need to be sent over the wire, and so on.
Note that since the server will never send us a stream message
bigger than VIR_NET_MESSAGE_LEGACY_PAYLOAD_MAX, there's no point in
sizing the client buffer past this threshold.
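The receive side of such a wrapper, as a minimal sketch
(virStreamRecv is the real public API; the buffer size is hard-coded
here because VIR_NET_MESSAGE_LEGACY_PAYLOAD_MAX is an internal
constant, and short-write/abort handling is omitted):

    #include <stdlib.h>
    #include <unistd.h>
    #include <libvirt/libvirt.h>

    #define DEMO_BUF_SIZE 262120  /* matches VIR_NET_MESSAGE_LEGACY_PAYLOAD_MAX */

    static int
    demoRecvAll(virStreamPtr st, int fd)
    {
        char *buf = malloc(DEMO_BUF_SIZE);
        int got = -1;

        if (!buf)
            return -1;
        /* one message-sized buffer: each virStreamRecv can drain a whole
         * stream message instead of a quarter of it */
        while ((got = virStreamRecv(st, buf, DEMO_BUF_SIZE)) > 0) {
            if (write(fd, buf, got) != got) {
                got = -1;
                break;
            }
        }
        free(buf);
        return got;   /* 0 on end of stream, negative on error */
    }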
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
There are two functions on the client that handle incoming stream
data. The first one, virNetClientStreamQueuePacket(), is a low level
function that just processes the incoming stream data from the
socket and stores it in an internal structure. This happens in the
client event loop, therefore the shorter the callbacks are, the
better. The second function, virNetClientStreamRecvPacket(), then
handles copying data from the internal structure into a
client-provided buffer.
The change introduced in this commit does just that: a new queue for
incoming stream packets is introduced. Instead of copying data into
an intermediate internal buffer and then copying it into the user
buffer, incoming stream messages are queued and the data is copied
just once, in the upper layer function
virNetClientStreamRecvPacket(). In the end there is just one copy of
the data and therefore a shorter event loop callback. This should
boost performance, which has proven to be the case in my testing.
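To make the single-copy idea concrete, a toy sketch (types and names
are illustrative, not libvirt's; the queue must be initialized with
head = NULL and tail = &head):

    #include <stdlib.h>
    #include <string.h>

    typedef struct DemoStreamMsg DemoStreamMsg;
    struct DemoStreamMsg {
        char *data;
        size_t len;
        size_t offset;            /* how much of this message was consumed */
        DemoStreamMsg *next;
    };

    typedef struct {
        DemoStreamMsg *head;
        DemoStreamMsg **tail;     /* init: head = NULL, tail = &head */
    } DemoStreamQueue;

    /* Called from the event loop callback: O(1), no data copied. */
    static void
    demoQueuePacket(DemoStreamQueue *q, DemoStreamMsg *msg)
    {
        msg->next = NULL;
        *q->tail = msg;
        q->tail = &msg->next;
    }

    /* Called from the user-facing recv path: the only place data is copied. */
    static size_t
    demoRecvPacket(DemoStreamQueue *q, char *buf, size_t buflen)
    {
        size_t copied = 0;

        while (q->head && copied < buflen) {
            DemoStreamMsg *msg = q->head;
            size_t want = msg->len - msg->offset;

            if (want > buflen - copied)
                want = buflen - copied;
            memcpy(buf + copied, msg->data + msg->offset, want);
            copied += want;
            msg->offset += want;
            if (msg->offset == msg->len) {   /* message fully consumed */
                q->head = msg->next;
                if (!q->head)
                    q->tail = &q->head;
                free(msg->data);
                free(msg);
            }
        }
        return copied;
    }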
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This reverts commit d9c9e138f2.
Unfortunately, things are going to be handled differently so this
commit must go.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
The man page says: "(Re)-Connect to the hypervisor. When the shell is
first started, this is automatically run with the URI parameter
requested by the "-c" option on the command line." However, if you run:
virsh -c 'test://default' 'connect; uri'
the output will not be 'test://default'. That's because the
'connect' command does not care about any virsh-only settings and,
if it is run without parameters, it connects with @uri == NULL. Not
only does that not comply with what the man page describes, it also
doesn't make sense. It also means you aren't able to reconnect to
whatever you are currently connected to.
So let's fix that in both virsh and virt-admin and add a test case
for it.
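A rough sketch of the intended fallback (not the actual virsh code;
virConnectGetURI and virConnectOpenAuth are the real public APIs):

    #include <stdlib.h>
    #include <libvirt/libvirt.h>

    /* When 'connect' is given no URI, reuse the URI of the current
     * connection instead of connecting with a NULL URI. */
    static virConnectPtr
    demoReconnect(virConnectPtr old, const char *requestedUri)
    {
        char *uri = NULL;
        virConnectPtr conn;

        if (!requestedUri && old)
            uri = virConnectGetURI(old);   /* caller must free() the result */

        conn = virConnectOpenAuth(requestedUri ? requestedUri : uri,
                                  virConnectAuthPtrDefault, 0);
        free(uri);
        return conn;
    }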
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
This reverts commit 6e244c659f, which
added support to qemu for the "peer" attribute in domain interface <ip>
elements.
It's being removed temporarily for the release of libvirt 1.3.4
because the feature doesn't work, and there are concerns that it may
need to be modified in an externally visible manner which could create
backward compatibility problems.
Conflicts:
tests/qemuxml2argvmock.c - a mock of virNetDevSetOnline() was added
which may be assumed by other tests added since the original commit,
so it isn't being reverted.
This reverts commit afee47d07c, which
added support to lxc for the "peer" attribute in domain interface <ip>
elements.
It's being removed temporarily for the release of libvirt 1.3.4
because the feature doesn't work, and there are concerns that it may
need to be modified in an externally visible manner which could create
backward compatibility problems.
This reverts commit 690969af9c, which
added the domain config parts to support a "peer" attribute in domain
interface <ip> elements.
It's being removed temporarily for the release of libvirt 1.3.4
because the feature doesn't work, and there are concerns that it may
need to be modified in an externally visible manner which could create
backward compatibility problems.
FD passing APIs like CreateXMLWithFiles or OpenGraphicsFD will leak
file descriptors. The user passes in an fd, which is dup()'d in
virNetClientProgramCall. The new fd is what is transferred to the
server in virNetClientIOWriteMessage.
Once all the fds have been written, though, the parent msg->fds list
is immediately free'd, so the individual fds are never closed.
This patch closes each FD as it is sent to the server, so all fds
have been closed by the time msg->fds is free'd.
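The shape of the fix, as a sketch with stand-in names (the real code
lives in the virNetClient I/O path and sends fds via the socket
layer):

    #include <unistd.h>
    #include <stddef.h>

    /* Once a duplicated fd has been handed to the socket layer, close our
     * copy right away instead of leaving it open until msg->fds is freed
     * (which never closed the individual fds). */
    static int
    demoWriteMessageFds(int *fds, size_t nfds, size_t *donefds,
                        int (*sendfd)(int fd))
    {
        while (*donefds < nfds) {
            if (sendfd(fds[*donefds]) < 0)
                return -1;            /* retry later from the same index */
            close(fds[*donefds]);     /* our dup()'d copy is no longer needed */
            fds[*donefds] = -1;
            (*donefds)++;
        }
        return 0;
    }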
https://bugzilla.redhat.com/show_bug.cgi?id=1159766
If we want to delete all disks of a container or VM, we should loop
from 0 to NumberOfDisks but always pass index zero to
PrlVmCfg_GetHardDisk to get the disk handle. After the first disk is
deleted, the remaining disks are renumbered, now running from 0 to
NumberOfDisks-1. That's why we should always use index zero.
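The deletion loop, as a sketch: PrlVmCfg_GetHardDisk is the SDK call
named above, while the count getter and removal call shown here are
assumptions about the Parallels/Virtuozzo SDK and may not match its
exact names or signatures.

    /* Requires the Parallels/Virtuozzo SDK headers; the helpers other than
     * PrlVmCfg_GetHardDisk are assumptions about the SDK API. */
    static int
    demoRemoveAllDisks(PRL_HANDLE sdkdom)
    {
        PRL_UINT32 count = 0;
        PRL_UINT32 i;
        PRL_HANDLE disk;

        if (PRL_FAILED(PrlVmCfg_GetHardDisksCount(sdkdom, &count)))
            return -1;

        for (i = 0; i < count; i++) {
            /* always index 0: each removal shifts the remaining disks down */
            if (PRL_FAILED(PrlVmCfg_GetHardDisk(sdkdom, 0, &disk)) ||
                PRL_FAILED(PrlVmDev_Remove(disk)))
                return -1;
        }
        return 0;
    }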