In VirtualBox, SAS and SCSI are separate controller types, whereas
libvirt does not make such a distinction. This patch adds support for
attaching VBOX SAS controllers by mapping the 'lsisas1068' controller
model in the libvirt XML to the VBOX SAS controller type. If a VBOX VM
has disks attached to both a SCSI and a SAS controller, the libvirt
domain XML will have two <controller type='scsi'> elements with the
index and model attributes set accordingly. In this case, each <disk>
element must have an <address> element specified to assign it to the
respective SCSI controller. This patch adds an <address> element to
each <disk> device, since device names alone do not adequately reflect
the storage device layout in the VM. With this patch, the output
produced by dumpxml will faithfully reproduce the storage layout of
the VM when used with define.
Previously, any removable storage device without media attached was
omitted from the domain XML dump. Such devices are still (rightfully)
omitted from the snapshot XML dump, but they need to be accounted for
properly so that device names stay in sync between the domain and
snapshot XML dumps.
Prime the code for further changes:
* move variable declarations to the top of the function
* group free/release statements together
* check and report errors for the VBOX API calls used
If a VBOX VM has e.g. a SATA and a SCSI disk attached, the XML
generated by dumpxml used to produce "sda" for both of those disks.
This is invalid domain XML, as libvirt does not allow duplicate device
names. To address this, keep a running total of the disks that will
use the "sd" prefix for their device name and pass it to
vboxGenerateMediumName, which no longer tries to "compute" the value
based only on the current and maximum port and slot values. After
this, vboxGetMaxPortSlotValues is no longer needed and is deleted.
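A toy illustration of the running-count idea (a stand-in sketch, not
the actual vbox driver code):

  #include <stdio.h>

  /* Toy stand-in for the real name generator: maps a running index to
   * "sda", "sdb", ... (only the first 26 disks, for brevity). */
  static void
  sd_name(size_t idx, char *buf, size_t buflen)
  {
      snprintf(buf, buflen, "sd%c", (char)('a' + idx));
  }

  int main(void)
  {
      /* One running count covers every bus whose disks use the "sd"
       * prefix, so a SATA and a SCSI disk can no longer both end up
       * named "sda". */
      const char *buses[] = { "SATA", "SCSI", "SAS" };
      size_t sdCount = 0;
      char name[8];

      for (size_t i = 0; i < sizeof(buses) / sizeof(buses[0]); i++) {
          sd_name(sdCount++, name, sizeof(name));
          printf("%s disk -> %s\n", buses[i], name);
      }
      return 0;
  }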
Neither vboxSnapshotGetReadOnlyDisks nor vboxSnapshotGetReadWriteDisks
needs to free def->disks on cleanup, because that is done by the
caller via virDomainSnapshotDefFree.
This patch prepares the vboxSnapshotGetReadOnlyDisks and
vboxSnapshotGetReadWriteDisks functions for further changes so that
the code movement does not obscure the gist of those future changes.
This is done primarily because we will need to know the type of the
VBOX storage controller as early as possible and make decisions based
on that information.
With this patch, the vbox driver no longer attaches all supported
storage controllers by default even if no disk devices are associated
with them. Instead, it attaches only those that are implicitly added
by virDomainDefAddImplicitController based on the <disk> elements, or
those explicitly specified via <controller> elements.
Since the VBOX API requires registering an initial VM before any
remaining devices can be attached to it, any failure to attach such
devices should result in automatic cleanup of the initially registered
VM, so that the state of the VBOX registry remains clean, without any
leftover "aborted" VMs in it. A failure to clean up such a partial VM
results only in a warning log, so that the actual define error stays
on top of the error stack.
Since qemu 2.9, via commit 9103f1ce ("file-posix: Consider
max_segments for BlockLimits.max_transfer"), this is a new access that
is denied by the qemu profile.
It is non-fatal, but it prevents the mentioned fix from actually
working.
It should be safe to allow reading from that path.
Since qemu opens the path via a symlink, we need to translate it for
apparmor from
"/sys/dev/block/*/queue/max_segments" to
"/sys/devices/**/block/*/queue/max_segments"
Signed-off-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
Add a new test program called 'qemublocktest' to test block layer
related code; it tests the storage source to JSON generator by
comparing its output against the JSON parser.
Libvirt historically stores the storage source path, including the
volume, as one string in the XML, but that is not flexible enough when
dealing with the fields in the code. Previously we stored the slash
separating the two as part of the image name. This was fine for
gluster, but it is not necessary and does not scale well when
converting other protocols.
Don't store the slash as part of the path. The resulting change from
an absolute to a relative path within the gluster driver should be
okay, as the root directory is the default when accessing gluster.
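A self-contained sketch of the splitting (a hypothetical helper, not
the actual libvirt parser):

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  /* Split "volume/image" into the volume name and a relative image
   * path; the '/' separator is no longer kept as part of the path. */
  static int
  split_volume_path(const char *name, char **volume, char **path)
  {
      const char *slash = strchr(name, '/');
      size_t vlen;

      if (!slash || slash == name || !slash[1])
          return -1;

      vlen = (size_t)(slash - name);
      if (!(*volume = malloc(vlen + 1)))
          return -1;
      memcpy(*volume, name, vlen);
      (*volume)[vlen] = '\0';

      /* "image.qcow2", not "/image.qcow2" */
      if (!(*path = strdup(slash + 1)))
          return -1;
      return 0;
  }

  int main(void)
  {
      char *vol = NULL;
      char *path = NULL;

      if (split_volume_path("testvol/image.qcow2", &vol, &path) == 0)
          printf("volume=%s path=%s\n", vol, path);

      free(vol);
      free(path);
      return 0;
  }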
Extract the code formatting the basic URI so that it can be reused to
format JSON backing definitions. Parts specific to the command line
format remain in qemuBuildNetworkDriveURI. The new function is called
qemuBlockStorageSourceGetURI.
The original implementation used qemu's 'SocketAddress' equivalent for
the disk server field, while the qemu documentation specifies
'InetSocketAddress'. The backing store parser uses the correct parsing
function, but the formatter used the incorrect one (and also with the
legacy mode enabled, which was wrong).
To allow merging this with other disk type checks, we need to consult
qemuCaps only when it is available, since some of the checks are
executed on disk cold-plug, where capabilities should not be checked.
Make the checks optional by making them conditional on qemuCaps not
being NULL.
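The resulting pattern looks roughly like this (a minimal sketch using
an arbitrary capability, not the actual checks being moved):

  /* Sketch only: the capability check is skipped when qemuCaps is
   * NULL, e.g. when validating a cold-plugged disk. */
  if (qemuCaps &&
      disk->bus == VIR_DOMAIN_DISK_BUS_USB &&
      !virQEMUCapsGet(qemuCaps, QEMU_CAPS_DEVICE_USB_STORAGE)) {
      virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                     _("usb-storage is not supported by this QEMU binary"));
      return -1;
  }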
All of the error messages are already in a conditional block with a
known bus type. Inline the bus type rather than formatting it from a
separate variable.
The disk index validation is used only in very specific cases and does
not need to be performed otherwise. Move it out of the global check to
the place where it is used.
busid and unitid are only ever used if the device is an SD card, due
to the check in qemuDiskBusNeedsDeviceArg. Since an SD card does not
have a bus or unit number, most of that code and the command line
formatter can be removed, as it will never be used.
In the near future we will need more than just a plain VIR_STRDUP().
Better to implement that in a separate function than in
qemuBuildMemoryBackendStr(), which is complicated enough already.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
This function works on a domain definition, not a domain object. Its
name is thus misleading.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
After the virNetDaemonAddServerPostExec call in virtlogd, the
netserver refcount should be 2: one reference is held by the
netdaemon's servers hash table and one is virt{logd,lockd}'s own
reference to the netserver. Add the missing increment in
virNetDaemonAddServerPostExec itself, while the daemon lock is held.
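A minimal sketch of the intended pattern (error handling elided; not
the actual code):

  virObjectLock(dmn);
  if (virHashAddEntry(dmn->servers, name, srv) == 0)
      virObjectRef(srv);  /* reference owned by the servers hash table */
  virObjectUnlock(dmn);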
Since lockd defers management of the @srv object to its presence in
the hash table, virLockDaemonNewPostExecRestart must unref the
reference on the @srv object taken as part of the
virNetDaemonAddServerPostExec and virNetServerNewPostExecRestart
processing. The virNetDaemonGetServer call in lock_daemon's main will
also take a reference, which is unref'd during main's cleanup.
Commit id '252610f7d' used a hash table to store @srv, but it did not
handle the virObjectUnref if virNetDaemonNew failed, nor did it
virObjectUnref @srv once it was successfully placed into the table,
which now manages its lifetime (a successful insertion into the table
takes its own virObjectRef).
When a coverage build is enabled, gcc complains:
In file included from qemu/qemu_agent.h:29:0,
from qemu/qemu_driver.c:47:
qemu/qemu_driver.c: In function 'qemuDomainSetInterfaceParameters':
./conf/domain_conf.h:3397:1: error: inlining failed in call to
'virDomainNetTypeSharesHostView': call is unlikely and code size would
grow [-Werror=inline]
virDomainNetTypeSharesHostView(const virDomainNetDef *net)
^
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
In some cases there is a dangling backslash at the end of a multi-line
macro. While the code technically works, it will stop working if
certain empty lines are removed.
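For example (a self-contained toy, not code from the tree):

  #include <stdio.h>

  /* The trailing '\' after the last line continues the macro onto the
   * following (empty) line. */
  #define PRINT_SUM(a, b)          \
      printf("%d\n", (a) + (b))    \

  int main(void)
  {
      /* This compiles only because the empty line above still ends the
       * macro; deleting it would pull "int main(void)" into the macro
       * body and break the build. */
      PRINT_SUM(1, 2);
      return 0;
  }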
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>