As we already test that the extraction of the backing store string works
well, additional tests for the backing store string parser can be made
simpler.
Export virStorageSourceNewFromBackingAbsolute and use it to parse the
backing store strings, format them using virDomainDiskSourceFormat and
match them against expected XMLs.
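A rough sketch of the resulting test flow (the test data struct and the
exact signatures below are illustrative, based on the libvirt APIs named
above):

    static int
    testBackingParse(const void *opaque)
    {
        const struct testBackingParseData *data = opaque;  /* illustrative */
        virStorageSourcePtr src = NULL;
        virBuffer buf = VIR_BUFFER_INITIALIZER;
        int ret = -1;

        /* parse the backing store string into a virStorageSource */
        if (!(src = virStorageSourceNewFromBackingAbsolute(data->backing)))
            goto cleanup;

        /* format it back as a <source> XML snippet */
        if (virDomainDiskSourceFormat(&buf, src, 0, 0) < 0)
            goto cleanup;

        /* compare with the expected XML */
        if (STRNEQ_NULLABLE(virBufferCurrentContent(&buf), data->xml))
            goto cleanup;

        ret = 0;
     cleanup:
        virStorageSourceFree(src);
        virBufferFreeAndReset(&buf);
        return ret;
    }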
Failure to parse an XML that was not supposed to fail would result in a
crash in the test suite, as the vcpu bitmap would not be filled prior to
the active XML->XML test.
Skip formatting of the vcpu snippet in the fake status XML formatter in
that case to avoid the crash. The test would fail anyway.
Nothing in the code path after the removed call needs or uses the alias
anyway (as would be the case for command line building or talking to the monitor).
The alias is VIR_FREE'd in virDomainDeviceInfoClear, which is called for any
device that needs/uses an alias via virDomainDeviceDefFree or virDomainDefFree,
as well as during virDomainDeviceInfoFree for host devices.
For persistent domains, the domain definition (including aliases) gets
freed a few screens later when it's replaced with newDef.
For transient domains, the definition is freed/unref'd along with the
virDomainObj a few moments later.
Introduce initial support for the domainBlockStats API call, which
allows us to query block device statistics. OpenStack Nova
uses this API call to query block statistics, alongside
virDomainMemoryStats and virDomainInterfaceStats. Note that
this patch only introduces it for VBD for starters; QDisk
support will come in a separate patch series.
A new statistics data structure is introduced to hold the statistics
common to all block backends alongside those specific to the underlying
backend. On Linux, the VBD statistics are exported via sysfs at the path:
"/sys/bus/xen-backend/devices/vbd-<domid>-<devid>/statistics"
To calculate the block device number (devno), libxlDiskPathToID is introduced.
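For reference, a hedged sketch of the mapping for the common
single-letter xvdN case (the real libxlDiskPathToID handles more device
name formats; the helper name and scope here are illustrative):

    /* "xvda" -> 51712, "xvdb" -> 51728, ...: devno = (202 << 8) | (disk << 4) */
    static int
    diskPathToID(const char *virtpath)
    {
        int disk;

        if (!STRPREFIX(virtpath, "xvd") || virtpath[3] == '\0')
            return -1;

        disk = virtpath[3] - 'a';
        if (disk < 0 || disk > 15)  /* extended numbering not shown */
            return -1;

        return (202 << 8) | (disk << 4);
    }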
Each backend implements its own function to extract statistics,
allowing support for multiple backends and different platforms.
VBD stats are exposed by blkback as request counts and numbers of
sectors, and it's up to us to convert the sectors into byte sizes.
The sector size is gathered through xenstore from the device
backend entry "physical-sector-size".
A BlockStatsFlags variant is also implemented, which has the
added benefit of reporting the number of flush requests.
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
QEMU reports a timestamp along with other memory statistics, but this information is not saved into the domain statistics.
It could be useful to determine whether the reported data is fresh.
Balloon statistics are not reported in human-readable format (hrf), so no modifications are made in qemu_monitor_text.c.
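A sketch of how the timestamp can be stored when parsing the QMP reply
(the tag is the constant introduced by this patch; the surrounding
parsing code is abbreviated and the variable names are approximate):

    /* in the QMP balloon stats parsing, next to the other counters */
    long long timestamp;
    if (virJSONValueObjectGetNumberLong(data, "last-update", &timestamp) == 0) {
        stats[got].tag = VIR_DOMAIN_MEMORY_STAT_LAST_UPDATE;
        stats[got].val = timestamp;
        got++;
    }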
Signed-off-by: Derbyshev Dmitry <dderbyshev@virtuozzo.com>
'memtotal' in the virtio drivers and QEMU corresponds to 'available' in libvirt.
Because of that, 'stat-available-memory' is renamed to 'usable'.
Balloon statistics are not reported in human-readable format (hrf), so no modifications are made in qemu_monitor_text.c.
Signed-off-by: Derbyshev Dmitry <dderbyshev@virtuozzo.com>
Dropping the caching of the ccw address set.
The cached set is no longer required, because the set is now
recalculated from the domain definition on demand, so the cache
can be deleted.
The address sets (pci, ccw, virtio serial) are currently cached
in qemu private data, but all the information required to recreate
these sets is in the domain definition. Therefore I am removing
the redundant data and adding a way to recalculate these sets.
Add a function that calculates the ccw address set
from the domain definition.
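A sketch of the shape of such a function (the helper names are
approximate; virDomainDeviceInfoIterate is the existing definition
walker):

    static int
    collectCCWAddr(virDomainDefPtr def ATTRIBUTE_UNUSED,
                   virDomainDeviceDefPtr dev ATTRIBUTE_UNUSED,
                   virDomainDeviceInfoPtr info,
                   void *opaque)
    {
        virDomainCCWAddressSetPtr addrs = opaque;

        /* only devices that already have a ccw address are recorded */
        if (info->type != VIR_DOMAIN_DEVICE_ADDRESS_TYPE_CCW)
            return 0;

        /* approximate reserve helper; marks the address as in use */
        return virDomainCCWAddressReserveAddr(addrs, &info->addr.ccw);
    }

    virDomainCCWAddressSetPtr
    ccwAddrSetCreateFromDomain(virDomainDefPtr def)
    {
        virDomainCCWAddressSetPtr addrs;

        if (!(addrs = virDomainCCWAddressSetCreate()))
            return NULL;

        if (virDomainDeviceInfoIterate(def, collectCCWAddr, addrs) < 0) {
            virDomainCCWAddressSetFree(addrs);
            return NULL;
        }

        return addrs;
    }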
Dropping the caching of the virtio serial address set.
The cached set is no longer required, because the set is now
recalculated from the domain definition on demand, so the cache
can be deleted.
Credit goes to Cole Robinson.
Dropping the caching of the virtio serial address set.
Instead of using the cached address set, a function in qemu_hotplug.c
now recalculates it on demand.
Credit goes to Cole Robinson.
The address sets (pci, ccw, virtio serial) are currently cached
in qemu private data, but all the information required to recreate
these sets is in the domain definition. Therefore I am removing
the redundant data and adding a way to recalculate these sets.
Add a function that calculates the virtio serial address set
from the domain definition.
Credit goes to Cole Robinson.
This missing symbol has been reported as causing build
failures on OS X. If it's not already defined, define it to
zero so that it won't have any effect.
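The usual pattern for this (the symbol name below is a placeholder; the
commit applies it to the actual missing symbol):

    /* not available on OS X; defining it to zero makes it a no-op */
    #ifndef SOME_PLATFORM_FLAG
    # define SOME_PLATFORM_FLAG 0
    #endif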
Commit 4a585a88 introduced searching for the QOM device path by alias; let's
use it for memballoon too. This may speed up the search because in most cases
we will find the correct QOM device path directly via the alias, without
needing the recursive code.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Commit ce745914 introduced detection of actual video RAM sizes to fix migration
in case QEMU decides to modify the values provided by libvirt. This works
perfectly for domains with up to two video devices.
If there are more than two video devices in the guest, all the secondary
devices in the XML will have the same memory values. This is because our
current code searches for the QOM device path only by the device type name,
and all the secondary video devices have the same name "qxl".
This patch introduces a new search function that tries to find a QOM device
path using the device's alias, if the alias is available. It falls
back to the old recursive code if the alias search finds no results.
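A sketch of the alias-based lookup (the /machine/peripheral prefix is
the standard QOM container for devices created with an id; the function
name is illustrative):

    /* devices created as -device ...,id=<alias> are attached under the
     * well-known QOM container /machine/peripheral */
    static char *
    qemuFindQOMPathByAlias(const char *alias)
    {
        char *path = NULL;

        if (virAsprintf(&path, "/machine/peripheral/%s", alias) < 0)
            return NULL;

        /* the caller verifies the path against the monitor and falls
         * back to the recursive search if it does not exist */
        return path;
    }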
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1358728
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Previously, qemuDomainAttachDeviceFlags was doing two things:
handling the job and attaching devices. Now the second part is
in a new function.
This change is required to make it possible to test more complex
device attachment situations, like attaching a device to both
config and live at once.
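A sketch of the resulting split (function and helper names approximate;
the helper carries all of the actual attach logic):

    static int
    qemuDomainAttachDeviceFlags(virDomainPtr dom,
                                const char *xml,
                                unsigned int flags)
    {
        virQEMUDriverPtr driver = dom->conn->privateData;
        virDomainObjPtr vm = NULL;
        int ret = -1;

        if (!(vm = qemuDomObjFromDomain(dom)))
            return -1;

        if (virDomainAttachDeviceFlagsEnsureACL(dom->conn, vm->def, flags) < 0)
            goto cleanup;

        /* job handling stays here ... */
        if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0)
            goto cleanup;

        /* ... while parsing the device XML and updating the config
         * and/or live definition moves into the new helper */
        ret = qemuDomainAttachDeviceLiveAndConfig(dom->conn, driver, vm,
                                                  xml, flags);

        qemuDomainObjEndJob(driver, vm);
     cleanup:
        virDomainObjEndAPI(&vm);
        return ret;
    }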
We want to be able to pass NULL instead of the connection
and use this function in tests. To achieve this, a virConnectPtr
is passed instead of virDomainPtr, and the driver is a new separate
parameter.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
There's a plan to rework the address handling, so test cases
that verify hotplugging ccw devices will help in avoiding
regressions.
In this commit, some files are duplicated because of the way
qemuhotplug.c calculates the expected XML filenames.
I plan on changing that to explicitly state the base domain
XML, the device XML, and the expected XML.
Commit 306b3a8504 tried mimicking the behaviour of commit 540c339a25, but
added a virObjectRef(vm) only after virDomainObjListAdd() in
lxcDomainDefineXMLFlags() and not in lxcDomainCreateXMLWithFiles().
That way, undefining a domain that was started with different XML than
it was defined with leaves the domain object in a state with too few
references to remove it. Hence any lxcDomainDestroyFlags() called
afterwards crashes the daemon.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1351057
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Since the return code is checked globally at the end of the function, let's
make sure that we set it correctly at every point.
This fixes a regression introduced in commit 0aa19f35 where the first
command to eject changeable media would fail unconditionally.
Signed-off-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
This patch forces the container's init process to become a session leader,
that is, its session ID is made the same as its process ID.
That might seem unnecessary in general, but if we want to checkpoint a
container with CRIU, which is needed for container migration,
we must ensure that the SID of each process inside the container points
to a process that lives in the same PID namespace as the container.
Therefore, we force the session leader to be the init process.
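The change itself boils down to calling setsid() while setting up the
container's init process, along the lines of (error reporting in the
usual libvirt style):

    /* make the init process a session leader: its SID becomes its PID */
    if (setsid() < 0) {
        virReportSystemError(errno, "%s",
                             _("Unable to become session leader"));
        return -1;
    }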
Signed-off-by: Katerina Koukiou <k.koukiou@gmail.com>
When parsing a command line with USB devices that have
no address specified, QEMU automatically adds a USB hub
if the devices would fill up all the available USB ports.
To help most of the users, add one hub if there are more
USB devices than available ports. For wilder configurations,
expect the user to provide us with more hubs and/or controllers.
Walk through all the USB hubs in the domain definition
that have a USB address specified, create the
corresponding structures in the virDomainUSBAddressSet,
and mark the ports they occupy as used.
A new type to track USB addresses.
Every <controller type='usb' index='i'/> is represented by an
object of type virDomainUSBAddressHub located at buses[i].
Each of these hubs has up to 'nports' ports.
If a port is occupied, it has the corresponding bit set in
the 'ports' bitmap, e.g. port 1 would have the 0th bit set.
If there is a hub on this port, then hubs[i] will point
to this hub.
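A sketch of the structures matching the description above (field names
follow the wording of this message; the final code may differ slightly):

    typedef struct _virDomainUSBAddressHub virDomainUSBAddressHub;
    typedef virDomainUSBAddressHub *virDomainUSBAddressHubPtr;
    struct _virDomainUSBAddressHub {
        size_t nports;                    /* number of ports on this hub */
        virBitmapPtr ports;               /* port p used => bit (p - 1) set */
        virDomainUSBAddressHubPtr *hubs;  /* hubs[i] => nested hub on port i+1 */
    };

    struct _virDomainUSBAddressSet {
        virDomainUSBAddressHubPtr *buses; /* buses[i] => <controller
                                             type='usb' index='i'/> */
        size_t nbuses;
    };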
The undefine procedure drops the domain lock while waiting for the
vz sdk call that detaches the disks. Meanwhile the vz sdk event
domain-config-changed arrives; its handler finds the domain and blocks
waiting for the job condition. After the undefine API call finishes, event
processing proceeds and tries to refresh the domain config through the
existing vz sdk domain handle. The domain does not exist anymore and event
processing fails. Everything is fine, we just don't want to see an error
message in the log for this particular case.
Fortunately the domain object has a flag indicating that the domain was
removed from the list. This also implies that the vz sdk domain is
undefined. Thus if we check for this flag right after the domain is locked
again on acquiring the job condition, we gracefully handle this situation.
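A sketch of the check (the 'removing' flag lives on the common
virDomainObj; the surrounding names follow vz driver conventions and are
approximate):

    if (vzDomainObjBeginJob(dom) < 0)
        goto cleanup;

    /* the domain may have been undefined while we waited on the job
     * condition with the lock dropped */
    if (dom->removing) {
        virReportError(VIR_ERR_NO_DOMAIN, "%s",
                       _("domain is not found"));
        goto endjob;
    }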
Actually the race can happen in other situations too. Any
time we wait for the job condition in a mutually exclusive job, the
vz sdk domain can cease to exist by the time we acquire it.
So instead of a general internal error we can return 'domain
not found', which is easier to handle. We don't need to patch
other places in mutually exclusive jobs where the domain lock
is dropped, because once a job is started the domain can't be undefined
by the mutually exclusive undefine job.
The code of this patch is quite similar to the qemu driver checks
for whether a domain is still active after acquiring a job. The only
difference is that while a qemu domain is operational while its process
is active, a vz domain is operational while the domain exists.
The current vz driver implementation is not usable when it comes to
long running operations. Migration or saving a domain blocks all
other operations, even query ones, which are expected to remain available.
This patch addresses this problem.
All vz driver API calls fall into the following 3 groups:
1. calls that only query the domain cache (virDomainObj, vz cached statistics);
examples are vzDomainGetState, vzDomainGetXMLDesc etc.
2. calls that use the thread shared sdkdom object;
examples are vzDomainSetMemoryFlags, vzDomainAttachDevice etc.
3. calls that use neither the thread shared sdkdom object nor the domain cache;
examples are vzDomainSnapshotListNames, vzDomainSnapshotGetXMLDesc etc.
API calls from group 1 don't need to be changed, as they hold the domain lock
only for a short period of time. These calls [1] are easily distinguished:
they query the domain object through libvirt common code or query the vz sdk
statistics handle through vz sdk sync operations.
vzDomainInterfaceStats is the only exception. It uses the sdkdom object to
convert an interface name to its vz sdk stack index, which cannot be saved in
the domain cache. Interface statistics are available through this stack index
as a key rather than by name. As a result we can get accidental 'not known
interface' errors when querying interface stats. The reason is that in the
process of updating the domain configuration we drop all devices and then
recreate them again in the sdkdom object, and the domain lock can be dropped
meanwhile (to remove networks for existing bridged interfaces and/or
(re)create new ones). We can fix this by changing the way we support bridged
interfaces or by reordering the operations and changing bridged networks
beforehand. Anyway this is better than moving this API call into group 2 and
making it an exclusive job.
As to API calls from group 2, the thread shared sdkdom object first needs to
be explained. vz sdk has only one handle for a given domain, thus threads need
exclusive access to operate on it. These calls are fixed to drop and reacquire
the domain lock on any lengthy operation, namely waiting for the result of an
async vz sdk operation. As the lock is dropped we need to take an extra
reference to the domain object if one is not taken already, since the domain
object can be deleted from the list while the lock is dropped. As these
operations use the thread shared sdkdom object, the simplest way to make
calls from group 2 consistent with each other is to make them mutually
exclusive. This is done by taking/releasing the job condition through the
corresponding job routines. This approach makes group 1 and group 2 calls
consistent with each other too. Not all calls in group 2 change the domain
cache, but those that do update it through prlsdkUpdateDomain, which holds
the lock throughout the update.
API calls from group [2] are easily distinguished too. They use
beginEdit/commit to change the domain configuration (vzDomainSetMemoryFlags)
and/or update the domain cache from sdkdom at the end of the operation
(vzDomainSuspend).
There is a known issue however. Frankly speaking it was introduced by the
earlier patch '[PATCH 6/9] vz: cleanup loading domain code' from a different
series. That patch significantly reduced the amount of time the driver lock
is held when creating a domain from an API call or as a result of a
domain-added event from vz sdk. The problem is that these two paths race on
using the thread shared sdkdom object, as we don't have a libvirt domain
object to lock on. However this doesn't invalidate the patch, as we can't use
the former approach of pre-adding the domain into the list: we need at least
a name, and the name is not given by the event. Anyway I'm against adding
half-baked objects into the list. Eventually this race can be fixed by extra
measures. As to the current situation, races with different configurations
are unlikely, and the race between adding a domain through the vz driver and
a simultaneous event from vz sdk is not dangerous, as the configuration is
the same.
The last group [3] consists of API calls that need only the sdkdom object to
make a vz sdk call and don't change the thread shared sdkdom object or the
domain cache in any way. For now these are mostly domain snapshot API calls.
The changes are similar to those of group 2: they take an extra reference and
drop/reacquire the lock while waiting for the vz async call result. One could
simply take the immutable sdkdom object from the cache and drop the lock for
the rest of the operation, but the chosen approach makes the implementation
of these API calls somewhat similar to those of group 2 and thus a bit more
futureproof. As calls of group 3 don't need the vz driver domain cache or the
vz sdk cache in any way, they are consistent with respect to API calls from
groups 1 and 3.
There is another exception. The calls to make-snapshot/revert-to-snapshot/migrate
are moved to group 2, that is, they are made mutually exclusive. The reason
is that the libvirt API supports control/query of only one job per domain,
and these are the jobs that are most likely to be queried/aborted.
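A sketch of the job condition pattern used for groups 2 and 3 (names and
the timeout constant are approximate; virCondWaitUntil drops the domain
lock while waiting):

    static int
    vzDomainObjBeginJob(virDomainObjPtr dom)
    {
        vzDomObjPtr pdom = dom->privateData;
        unsigned long long now;
        unsigned long long then;

        if (virTimeMillisNow(&now) < 0)
            return -1;
        then = now + VZ_JOB_WAIT_TIME;  /* illustrative timeout constant */

        while (pdom->job.active) {
            if (virCondWaitUntil(&pdom->job.cond, &dom->parent.lock, then) < 0) {
                virReportError(VIR_ERR_OPERATION_TIMEOUT, "%s",
                               _("cannot acquire job condition"));
                return -1;
            }
        }

        pdom->job.active = true;
        return 0;
    }

    static void
    vzDomainObjEndJob(virDomainObjPtr dom)
    {
        vzDomObjPtr pdom = dom->privateData;

        pdom->job.active = false;
        virCondSignal(&pdom->job.cond);
    }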
Appendix.
[1] API calls that only query domain cache.
(marked [*] are included for a different reason)
.domainLookupByID = vzDomainLookupByID, /* 0.10.0 */
.domainLookupByUUID = vzDomainLookupByUUID, /* 0.10.0 */
.domainLookupByName = vzDomainLookupByName, /* 0.10.0 */
.domainGetOSType = vzDomainGetOSType, /* 0.10.0 */
.domainGetInfo = vzDomainGetInfo, /* 0.10.0 */
.domainGetState = vzDomainGetState, /* 0.10.0 */
.domainGetXMLDesc = vzDomainGetXMLDesc, /* 0.10.0 */
.domainIsPersistent = vzDomainIsPersistent, /* 0.10.0 */
.domainGetAutostart = vzDomainGetAutostart, /* 0.10.0 */
.domainGetVcpus = vzDomainGetVcpus, /* 1.2.6 */
.domainIsActive = vzDomainIsActive, /* 1.2.10 */
.domainIsUpdated = vzDomainIsUpdated, /* 1.2.21 */
.domainGetVcpusFlags = vzDomainGetVcpusFlags, /* 1.2.21 */
.domainGetMaxVcpus = vzDomainGetMaxVcpus, /* 1.2.21 */
.domainHasManagedSaveImage = vzDomainHasManagedSaveImage, /* 1.2.13 */
.domainGetMaxMemory = vzDomainGetMaxMemory, /* 1.2.15 */
.domainBlockStats = vzDomainBlockStats, /* 1.2.17 */
.domainBlockStatsFlags = vzDomainBlockStatsFlags, /* 1.2.17 */
.domainInterfaceStats = vzDomainInterfaceStats, /* 1.2.17 */ [*]
.domainMemoryStats = vzDomainMemoryStats, /* 1.2.17 */
.domainMigrateBegin3Params = vzDomainMigrateBegin3Params, /* 1.3.5 */
.domainMigrateConfirm3Params = vzDomainMigrateConfirm3Params, /* 1.3.5 */
[2] API calls that use thread shared sdkdom object
(marked [*] are included for a different reason)
.domainSuspend = vzDomainSuspend, /* 0.10.0 */
.domainResume = vzDomainResume, /* 0.10.0 */
.domainDestroy = vzDomainDestroy, /* 0.10.0 */
.domainShutdown = vzDomainShutdown, /* 0.10.0 */
.domainCreate = vzDomainCreate, /* 0.10.0 */
.domainCreateWithFlags = vzDomainCreateWithFlags, /* 1.2.10 */
.domainReboot = vzDomainReboot, /* 1.3.0 */
.domainDefineXML = vzDomainDefineXML, /* 0.10.0 */
.domainDefineXMLFlags = vzDomainDefineXMLFlags, /* 1.2.12 */ (update part)
.domainUndefine = vzDomainUndefine, /* 1.2.10 */
.domainAttachDevice = vzDomainAttachDevice, /* 1.2.15 */
.domainAttachDeviceFlags = vzDomainAttachDeviceFlags, /* 1.2.15 */
.domainDetachDevice = vzDomainDetachDevice, /* 1.2.15 */
.domainDetachDeviceFlags = vzDomainDetachDeviceFlags, /* 1.2.15 */
.domainSetUserPassword = vzDomainSetUserPassword, /* 1.3.6 */
.domainManagedSave = vzDomainManagedSave, /* 1.2.14 */
.domainSetMemoryFlags = vzDomainSetMemoryFlags, /* 1.3.4 */
.domainSetMemory = vzDomainSetMemory, /* 1.3.4 */
.domainRevertToSnapshot = vzDomainRevertToSnapshot, /* 1.3.5 */ [*]
.domainSnapshotCreateXML = vzDomainSnapshotCreateXML, /* 1.3.5 */ [*]
.domainMigratePerform3Params = vzDomainMigratePerform3Params, /* 1.3.5 */ [*]
.domainUpdateDeviceFlags = vzDomainUpdateDeviceFlags, /* 2.0.0 */
prlsdkHandleVmConfigEvent
[3] API calls that do not use thread shared sdkdom object
.domainManagedSaveRemove = vzDomainManagedSaveRemove, /* 1.2.14 */
.domainSnapshotNum = vzDomainSnapshotNum, /* 1.3.5 */
.domainSnapshotListNames = vzDomainSnapshotListNames, /* 1.3.5 */
.domainListAllSnapshots = vzDomainListAllSnapshots, /* 1.3.5 */
.domainSnapshotGetXMLDesc = vzDomainSnapshotGetXMLDesc, /* 1.3.5 */
.domainSnapshotNumChildren = vzDomainSnapshotNumChildren, /* 1.3.5 */
.domainSnapshotListChildrenNames = vzDomainSnapshotListChildrenNames, /* 1.3.5 */
.domainSnapshotListAllChildren = vzDomainSnapshotListAllChildren, /* 1.3.5 */
.domainSnapshotLookupByName = vzDomainSnapshotLookupByName, /* 1.3.5 */
.domainHasCurrentSnapshot = vzDomainHasCurrentSnapshot, /* 1.3.5 */
.domainSnapshotGetParent = vzDomainSnapshotGetParent, /* 1.3.5 */
.domainSnapshotCurrent = vzDomainSnapshotCurrent, /* 1.3.5 */
.domainSnapshotIsCurrent = vzDomainSnapshotIsCurrent, /* 1.3.5 */
.domainSnapshotHasMetadata = vzDomainSnapshotHasMetadata, /* 1.3.5 */
.domainSnapshotDelete = vzDomainSnapshotDelete, /* 1.3.5 */
[4] Known issues.
1. accidental errors on getting network statistics
2. race with simultaneous use of thread shared domain object on paths
of adding domain thru API and adding domain on vz sdk domain added event.
The sdk domain handle is unique per connection, so there is
no sense in querying it again if we already have it in vzDomObjPtr.
The side effect of prlsdkSdkDomainLookupByUUID - refreshing the
domain config - is of no use too, as PrlVm_BeginEdit does that as well.
Commit id '5e46d7d6' did not take into account that usage of a luks
volume requires the master-key-encrypted passphrase in a QEMU
environment. So rather than allow creation of something that
won't be usable, just fail the creation.
Resolves a CI test integration failure with a RHEL6/CentOS6 environment.
In order to use a LUKS encrypted device, the design decision was to
generate an encrypted secret based on the master key. However, commit
id 'da86c6c' missed checking for that specifically.
When qemuDomainSecretSetup was implemented, a design decision was made
to "fall back" to a plain text secret setup if the specific cipher was
not available (e.g. virCryptoHaveCipher(VIR_CRYPTO_CIPHER_AES256CBC))
or the QEMU_CAPS_OBJECT_SECRET capability was missing. For the luks
encryption setup there is no fallback to a plaintext secret, thus if
that is what qemuDomainSecretSetup produced, we need to fail.
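A sketch of the added check (the secret info type enum is from
qemu_domain.h; the surrounding variable names are approximate):

    /* a LUKS passphrase must be an AES-encrypted secret; unlike other
     * consumers there is no plain text fallback, so fail right away */
    if (secinfo->type != VIR_DOMAIN_SECRET_INFO_TYPE_AES) {
        virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
                       _("luks encryption requires encrypted secrets "
                         "to be supported"));
        goto cleanup;
    }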
Also, while the qemuxml2argvtest has set the QEMU_CAPS_OBJECT_SECRET
bit, it didn't take into account the second requirement: the
ability to generate the encrypted secret. So modify the
test to not attempt to run the luks-disk case if we know we don't have
the encryption algorithm.
Most of the lines we look at are not going to match any of the
driver types contained in $groups_regex.
Move on to the next line early if it does not contain any of them.
This speeds up the script execution by 50%, since this simple regex
does not have any capture groups.