If the q35-specific disable_s3/s4 setting isn't supported, fall back to
specifying the PIIX setting, which was the previous behavior. It doesn't
have any effect, but qemu will just warn about it rather than error out:
qemu-system-x86_64: Warning: global PIIX4_PM.disable_s3=1 not used
qemu-system-x86_64: Warning: global PIIX4_PM.disable_s4=1 not used
Since qemu doesn't error out, I don't think we should either: there
may be configs in the wild that already combine q35 with disable_s3/4
(via virt-manager).
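For illustration, the fallback decision boils down to something like this
minimal standalone sketch (ICH9-LPC.disable_s3 is assumed here to be the
q35 counterpart of the PIIX4_PM global; the libvirt capability check
itself is not shown):

#include <stdbool.h>
#include <stdio.h>

/* Minimal standalone sketch of the fallback described above; the ICH9-LPC
 * global is an assumption, not copied from the patch. */
static const char *
disableS3Global(bool q35SettingSupported)
{
    /* Prefer the q35-specific global; otherwise keep the old PIIX one,
     * which qemu merely warns about on q35 machines. */
    return q35SettingSupported ? "ICH9-LPC.disable_s3=1"
                               : "PIIX4_PM.disable_s3=1";
}

int main(void)
{
    printf("-global %s\n", disableS3Global(false));
    return 0;
}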
This function may be called with @dconnuri == NULL, e.g. from
virDomainMigrateToURI3() if the VIR_MIGRATE_PEER2PEER flag is missing
from @flags. Moreover, all the functions called later from here wrap it
in NULLSTR(), so why not do the same here?
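For reference, the NULLSTR() idiom amounts to mapping NULL to a printable
placeholder; a tiny standalone sketch (not the actual libvirt header):

#include <stdio.h>

/* Standalone sketch of the NULLSTR() idiom: map NULL to a printable
 * placeholder so a NULL @dconnuri never reaches a %s format directly. */
#define NULLSTR(s) ((s) ? (s) : "<null>")

int main(void)
{
    const char *dconnuri = NULL;  /* e.g. VIR_MIGRATE_PEER2PEER not set */
    printf("dconnuri=%s\n", NULLSTR(dconnuri));
    return 0;
}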
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
The condition was checking for UHCI (and OHCI for ppc64) availability so
that it could specify the proper device instead of the legacy usb option.
However, for ppc64 we don't need to check both OHCI and UHCI, only OHCI,
as that is the legacy default there. The condition is so big that it was
only a matter of time before someone made a mistake in it, so let's
spread it over more lines to make it visible what the condition checks
for.
This fixes the usage of -device instead of -usb on ppc64, which supports
pci-usb-ohci but does not support piix3-usb-uhci.
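A rough standalone sketch of the shape the split-out check takes (the
struct and helper names are made up; only the per-architecture branching
is the point):

#include <stdbool.h>
#include <stdio.h>

/* Rough standalone sketch; one readable branch per architecture instead
 * of a single long condition. */
struct usbCaps { bool hasOHCI; bool hasUHCI; };

static bool
canUseUSBDevice(bool isPPC64, const struct usbCaps *caps)
{
    if (isPPC64)
        return caps->hasOHCI;  /* OHCI is the legacy default on ppc64 */

    return caps->hasUHCI;      /* UHCI is the legacy default elsewhere */
}

int main(void)
{
    struct usbCaps ppc64 = { .hasOHCI = true, .hasUHCI = false };
    printf("use -device: %d\n", canUseUSBDevice(true, &ppc64));
    return 0;
}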
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1297020
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
memory_dirty_rate corresponds to dirty-pages-rate in QEMU and
memory_iteration is what QEMU reports in dirty-sync-count.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The structure actually contains migration statistics rather than just
the status its name suggests. Renaming it to qemuMonitorMigrationStats
removes the confusion.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
A migration is in the "setup" state after it was "inactive" and before
it becomes "active". Let's reflect this in our migration status enum.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
My commit 674afcb09e3d33500cfbbcf870ebf92cb99ecfa3 moved computing the
default listen address from qemuMigrationPrepareAny to
qemuMigrationPrepareIncoming. However, I didn't notice that listenAddress
was later passed to qemuMigrationStartNBDServer. Thus, it would be called
with the original value of listenAddress (NULL).
Let's add the updated listen address to qemuProcessIncomingDef and use
it when starting NBD servers.
Reported-by: Michael Chapman <mike@very.puzzling.org>
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
If a user defines a virtio channel with a UNIX socket backend and doesn't
care about the socket path (e.g. a qemu-agent channel), we still generate
one into the persistent XML. Moreover, when the user then renames the
domain, it will not start, because the persistent socket path points into
the old per-domain directory. So let's forget about previously generated
paths and also stop putting them into the persistent definition.
https://bugzilla.redhat.com/show_bug.cgi?id=1278068
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Just recently, qemu forbade specifying a format for sourceless
disks (qemu commit 39c4ae941ed992a3bb5). It kind of makes sense:
if there's no file to open, why specify its format? Anyway, I
have a domain like this:
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<target dev='hda' bus='ide'/>
<readonly/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
and obviously I am unable to start it. Therefore, a fix on our
side is needed too.
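The idea of the fix on the libvirt side, as a standalone sketch (names
are made up, and this is not how libvirt actually builds the -drive
string): only emit format= for a drive that has a source to open.

#include <stdio.h>

/* Standalone sketch only. */
static void
printDriveArg(const char *sourcePath, const char *format)
{
    if (sourcePath)
        printf("-drive file=%s,format=%s,media=cdrom\n", sourcePath, format);
    else
        printf("-drive media=cdrom\n");  /* empty cdrom: no format= */
}

int main(void)
{
    printDriveArg(NULL, "raw");  /* the sourceless cdrom from the XML above */
    return 0;
}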
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
For completeness, use VIR_TRISTATE_SWITCH_ABSENT for the data.file.append
comparisons. Commits '70ffa02f' and '53a15aed' just went with a non-zero
comparison.
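For illustration, the difference between the two styles of check, as a
standalone sketch (the enum mirrors the shape of libvirt's
virTristateSwitch, where ABSENT happens to be 0):

#include <stdio.h>

/* Sketch of the shape of virTristateSwitch; ABSENT being 0 is why a bare
 * non-zero test happened to work in the earlier commits. */
typedef enum {
    VIR_TRISTATE_SWITCH_ABSENT = 0,
    VIR_TRISTATE_SWITCH_ON,
    VIR_TRISTATE_SWITCH_OFF,
} virTristateSwitch;

int main(void)
{
    virTristateSwitch append = VIR_TRISTATE_SWITCH_OFF;

    if (append != VIR_TRISTATE_SWITCH_ABSENT)   /* explicit: attribute given */
        printf("append was specified\n");
    if (append)                                 /* works only because ABSENT == 0 */
        printf("same result, less obvious intent\n");
    return 0;
}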
While reviewing 1b43885d1784640 I noticed a virReportError()
followed by a goto endjob; without setting the correct return
value. The problem is that if the block job is so fast that its
bandwidth does not fit into an unsigned long, an error is reported.
However, by that time @ret is already set to 1, which means success.
Since the scenario can hardly be considered successful, we should
return a value that means error.
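A standalone sketch of the bug class being fixed (not the actual code):

#include <stdio.h>

/* The error path must overwrite the earlier "success" value before
 * jumping to the cleanup label. */
static int
doJob(int conversionOverflows)
{
    int ret = 1;                /* 1 == job found, i.e. success */

    if (conversionOverflows) {  /* e.g. bandwidth does not fit an ulong */
        fprintf(stderr, "bandwidth out of range\n");
        ret = -1;               /* the assignment that was missing */
        goto endjob;
    }

 endjob:
    return ret;
}

int main(void)
{
    return doJob(1) == -1 ? 0 : 1;
}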
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Valgrind complained:
==18990== 20 (16 direct, 4 indirect) bytes in 1 blocks are definitely lost in loss record 188 of 996
==18990== at 0x4A057BB: calloc (vg_replace_malloc.c:593)
==18990== by 0x5292E9B: virAllocN (viralloc.c:191)
==18990== by 0x2221E731: qemuMigrationCookieXMLParseStr (qemu_migration.c:1012)
==18990== by 0x2221F390: qemuMigrationEatCookie (qemu_migration.c:1413)
==18990== by 0x222228CE: qemuMigrationPrepareAny (qemu_migration.c:3463)
==18990== by 0x22224121: qemuMigrationPrepareDirect (qemu_migration.c:3865)
==18990== by 0x22251C25: qemuDomainMigratePrepare3Params (qemu_driver.c:12414)
==18990== by 0x5389EE0: virDomainMigratePrepare3Params (libvirt-domain.c:5107)
==18990== by 0x1278DB: remoteDispatchDomainMigratePrepare3ParamsHelper (remote.c:5425)
==18990== by 0x53FF287: virNetServerProgramDispatch (virnetserverprogram.c:437)
==18990== by 0x540523D: virNetServerProcessMsg (virnetserver.c:135)
==18990== by 0x54052C7: virNetServerHandleJob (virnetserver.c:156)
==18990==
==18990== 20 (16 direct, 4 indirect) bytes in 1 blocks are definitely lost in loss record 189 of 996
==18990== at 0x4A057BB: calloc (vg_replace_malloc.c:593)
==18990== by 0x5292E9B: virAllocN (viralloc.c:191)
==18990== by 0x2221E731: qemuMigrationCookieXMLParseStr (qemu_migration.c:1012)
==18990== by 0x2221F390: qemuMigrationEatCookie (qemu_migration.c:1413)
==18990== by 0x222249D2: qemuMigrationRun (qemu_migration.c:4395)
==18990== by 0x22226365: doNativeMigrate (qemu_migration.c:4693)
==18990== by 0x22228E45: qemuMigrationPerform (qemu_migration.c:5553)
==18990== by 0x2225144B: qemuDomainMigratePerform3Params (qemu_driver.c:12621)
==18990== by 0x539F5D8: virDomainMigratePerform3Params (libvirt-domain.c:5206)
==18990== by 0x127305: remoteDispatchDomainMigratePerform3ParamsHelper (remote.c:5557)
==18990== by 0x53FF287: virNetServerProgramDispatch (virnetserverprogram.c:437)
==18990== by 0x540523D: virNetServerProcessMsg (virnetserver.c:135)
If we're replacing the NBD data, it's simplest to free the old object
(including the disk list) and allocate a new one.
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
Valgrind complained:
==23975== Conditional jump or move depends on uninitialised value(s)
==23975== at 0x22255FA6: qemuDomainGetBlockJobInfo (qemu_driver.c:16538)
==23975== by 0x538E97C: virDomainGetBlockJobInfo (libvirt-domain.c:9685)
==23975== by 0x12F740: remoteDispatchDomainGetBlockJobInfoHelper (remote.c:2834)
==23975== by 0x53FF287: virNetServerProgramDispatch (virnetserverprogram.c:437)
==23975== by 0x540523D: virNetServerProcessMsg (virnetserver.c:135)
==23975== by 0x54052C7: virNetServerHandleJob (virnetserver.c:156)
==23975== by 0x52F515B: virThreadPoolWorker (virthreadpool.c:145)
==23975== by 0x52F4668: virThreadHelper (virthread.c:206)
==23975== by 0x6E08A50: start_thread (in /lib64/libpthread-2.12.so)
==23975== by 0x82BE93C: clone (in /lib64/libc-2.12.so)
==23975==
==23975== Conditional jump or move depends on uninitialised value(s)
==23975== at 0x22255FB4: qemuDomainGetBlockJobInfo (qemu_driver.c:16542)
==23975== by 0x538E97C: virDomainGetBlockJobInfo (libvirt-domain.c:9685)
==23975== by 0x12F740: remoteDispatchDomainGetBlockJobInfoHelper (remote.c:2834)
==23975== by 0x53FF287: virNetServerProgramDispatch (virnetserverprogram.c:437)
==23975== by 0x540523D: virNetServerProcessMsg (virnetserver.c:135)
==23975== by 0x54052C7: virNetServerHandleJob (virnetserver.c:156)
==23975== by 0x52F515B: virThreadPoolWorker (virthreadpool.c:145)
==23975== by 0x52F4668: virThreadHelper (virthread.c:206)
==23975== by 0x6E08A50: start_thread (in /lib64/libpthread-2.12.so)
==23975== by 0x82BE93C: clone (in /lib64/libc-2.12.so)
If no matching block job is found, qemuMonitorGetBlockJobInfo returns 0
and we should not write anything to the caller-supplied
virDomainBlockJobInfo pointer.
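A simplified standalone sketch of the guard (types and names are made up):

#include <string.h>

/* Only copy into the caller-supplied structure when a job was found. */
struct jobInfo { unsigned long long cur, end; int type; };

static int
getBlockJobInfo(int jobPresent, struct jobInfo *info /* caller-supplied */)
{
    struct jobInfo raw;

    memset(&raw, 0, sizeof(raw));
    if (!jobPresent)
        return 0;               /* no matching job: leave *info untouched */

    raw.cur = 512; raw.end = 1024; raw.type = 1;  /* stand-in for monitor data */
    *info = raw;                /* written only on the "found" path */
    return 1;
}

int main(void)
{
    struct jobInfo info;
    return getBlockJobInfo(0, &info) == 0 ? 0 : 1;
}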
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
The manpage for sysconf() suggests including unistd.h, as the
function is declared there. Even though we are not hitting any
compile issues currently, let's include the correct header file
instead of relying on some hidden include chain.
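For reference, the direct include is all it takes (a trivial standalone
example, not the actual libvirt source):

#include <unistd.h>   /* declares sysconf(), as sysconf(3) suggests */

int main(void)
{
    long pagesize = sysconf(_SC_PAGESIZE);  /* no hidden include chain needed */

    return pagesize > 0 ? 0 : 1;
}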
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
By default, QEMU truncates the serial file on open. Sometimes this can be
inconvenient, for example when we are trying to investigate some event
which occurred several restarts ago. This patch adds the ability to
preserve the previous content.
Signed-off-by: Dmitry Mishin <dim@virtuozzo.com>
This replaces the virPCIKnownStubs string array that was used
internally for stub driver validation.
Advantages:
* possible values are well-defined
* typos in driver names will be detected at compile time
* avoids having several copies of the same string around
* no error checking required when setting / getting value
The names used mirror those in the
virDomainHostdevSubsysPCIBackendType enumeration.
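A standalone sketch of what the typed-enum approach buys (the enumerator
and driver-module names here are assumptions, not copied from the patch):

/* Well-defined values, compile-time typo detection, and a single copy of
 * each driver-name string. */
typedef enum {
    PCI_STUB_DRIVER_NONE = 0,
    PCI_STUB_DRIVER_KVM,        /* pci-stub */
    PCI_STUB_DRIVER_VFIO,       /* vfio-pci */
    PCI_STUB_DRIVER_XEN,        /* pciback  */

    PCI_STUB_DRIVER_LAST
} pciStubDriver;

static const char *const pciStubDriverNames[PCI_STUB_DRIVER_LAST] = {
    "none", "pci-stub", "vfio-pci", "pciback",
};

int main(void)
{
    return pciStubDriverNames[PCI_STUB_DRIVER_VFIO][0] == 'v' ? 0 : 1;
}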
We only support hotplugging SCSI controllers.
The USB and virtio-serial related code was never reachable because
this function was only called for VIR_DOMAIN_CONTROLLER_TYPE_SCSI
controllers.
This reverts commit ee0d97a and parts of commits 16db8d2
and d6d54cd1.
This function calls qemuDomainAttachControllerDevice for SCSI
controllers and reports an error for all other controllers.
Move the error inside qemuDomainAttachControllerDevice and delete this
wrapper.
A closer review of the code shows that for the transition from paused to
running, which was supposed to emit the VIR_DOMAIN_EVENT_RESUMED event,
no event would be generated; rather, the event was generated when going
from running to running.
Following the 'was_running' boolean shows it is set when the domain obj
is active and the domain obj state is VIR_DOMAIN_RUNNING. So rather than
using was_running to generate the RESUMED event, use !was_running.
When the function changes the memory lock limit for the first time,
it will retrieve the current value and store it inside the
virDomainObj for the domain.
When the function is called again, if memory locking is no longer
needed, it will be able to restore the memory locking limit to its
original value.
We increase the limit before plugging in a PCI hostdev or a memory
module because some memory might need to be locked due to e.g. VFIO.
Of course we should do the opposite after unplugging a device: this
was already the case for memory modules, but not for PCI hostdevs.
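A standalone sketch of the save/restore idea (all names are made up; the
real code adjusts the process's memory locking limit):

#include <stdbool.h>

/* Remember the pre-existing limit the first time we raise it, and put it
 * back once no device needs locked memory anymore. */
struct dom {
    bool origMemlockSaved;
    unsigned long long origMemlock;
    unsigned long long curMemlock;   /* stand-in for the real limit */
};

static void
setMemLockLimit(struct dom *d, unsigned long long newLimit)
{
    if (!d->origMemlockSaved) {      /* first change: save the original */
        d->origMemlock = d->curMemlock;
        d->origMemlockSaved = true;
    }
    d->curMemlock = newLimit;
}

static void
restoreMemLockLimit(struct dom *d)
{
    if (d->origMemlockSaved) {       /* e.g. after unplugging the hostdev */
        d->curMemlock = d->origMemlock;
        d->origMemlockSaved = false;
    }
}

int main(void)
{
    struct dom d = { .curMemlock = 64 * 1024 };

    setMemLockLimit(&d, 1ULL << 30);   /* before plugging a VFIO hostdev */
    restoreMemLockLimit(&d);           /* after unplugging it again */
    return d.curMemlock == 64 * 1024 ? 0 : 1;
}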
In commit 686eb7a24f7d, the break was not made part of the condition's
body, hence the loop broke after the first node when searching.
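A generic illustration of the bug class (not the actual code from
686eb7a24f7d):

#include <stddef.h>

/* Without braces the break is unconditional, so the search stops after
 * the first node; the fix is to make the break part of the condition's
 * body, i.e. brace both statements. */
static int
findFirstMatch(const int *nodes, size_t n, int wanted)
{
    int found = -1;
    size_t i;

    for (i = 0; i < n; i++) {
        if (nodes[i] == wanted)
            found = (int)i;
            break;              /* BUG: runs on every iteration */
    }
    return found;
}

int main(void)
{
    const int nodes[] = { 3, 5, 7 };

    /* With the bug, 7 is never found even though it is in the list. */
    return findFirstMatch(nodes, 3, 7) == -1 ? 0 : 1;
}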
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
when appropriate, of course. If the config for a domain specifies boot
order with <boot dev='blah'/> elements, e.g.:
<os>
...
<boot dev='hd'/>
<boot dev='network'/>
</os>
Then the first disk device in the config will have ",bootindex=1"
appended to its qemu commandline -device options, and the first (and
*only* the first) network interface device will get ",bootindex=2".
However, if the first network interface device is a "hostdev" device
(an SRIOV Virtual Function (VF) being assigned to the domain with
vfio), then the bootindex option will *not* be appended. This happens
because the bootindex=n option corresponding to the order of "<boot
dev='network'/>" is added to the -device for the first network device
when network device commandline args are constructed, but if it's a
hostdev network device, its commandline arg is instead constructed in
the loop for hostdevs.
This patch fixes that omission by noticing (in bootHostdevNet) if the
first network device was a hostdev, and if so passing on the proper
bootindex to the commandline generator for hostdev devices - the
result is that ",bootindex=2" will be properly appended to the first
"network" device in the config even if it is really a hostdev
(including if it is assigned from a libvirt network pool). (Note that
this is only the case if there is no <bootmenu enable='yes'/> element
in the config ("-boot menu=on" in qemu), since the two are mutually
exclusive - when the bootmenu is enabled, the individual per-device
bootindex options can't be used by qemu, and we revert to using "-boot
order=xyz" instead).
If a greater level of control over boot order is desired (e.g., more
than one network device should be tried, or a network device other
than the first one encountered in the config), then <boot
dev='network'/> in the <os> element should not be used; instead, the
individual device elements in the config should be given a "<boot
order='n'/>" element.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1278421
Commit 256496e1 introduced detection of "is locked" in the error reply
from the qemu monitor. Commit c4073657 fixed a memory leak, but it was
pointed out by Peter that this could be done more cleanly without
stringifying the reply.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
The return value of virJSONValueToString() should be freed when
no longer needed. This is not the case after 256496e1.
==26902== 138 bytes in 2 blocks are definitely lost in loss record 1,051 of 1,239
==26902== at 0x4C29F80: malloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==26902== by 0xAA5F599: strdup (in /lib64/libc-2.21.so)
==26902== by 0x552BAD9: virStrdup (virstring.c:726)
==26902== by 0x54F60A7: virJSONValueToString (virjson.c:1790)
==26902== by 0x1DF6EBB9: qemuMonitorJSONEjectMedia (qemu_monitor_json.c:2225)
==26902== by 0x1DF57A4C: qemuMonitorEjectMedia (qemu_monitor.c:1985)
==26902== by 0x1DF1EF2D: qemuDomainChangeEjectableMedia (qemu_hotplug.c:199)
==26902== by 0x1DF90314: qemuDomainChangeDiskLive (qemu_driver.c:7985)
==26902== by 0x1DF90476: qemuDomainUpdateDeviceLive (qemu_driver.c:8030)
==26902== by 0x1DF91ED7: qemuDomainUpdateDeviceFlags (qemu_driver.c:8677)
==26902== by 0x561785F: virDomainUpdateDeviceFlags (libvirt-domain.c:8559)
==26902== by 0x134210: remoteDispatchDomainUpdateDeviceFlags (remote_dispatch.h:10966)
==26902== 106 bytes in 1 blocks are definitely lost in loss record 1,033 of 1,239
==26902== at 0x4C29F80: malloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==26902== by 0xAA5F599: strdup (in /lib64/libc-2.21.so)
==26902== by 0x552BAD9: virStrdup (virstring.c:726)
==26902== by 0x54F60A7: virJSONValueToString (virjson.c:1790)
==26902== by 0x1DF6EC0C: qemuMonitorJSONEjectMedia (qemu_monitor_json.c:2227)
==26902== by 0x1DF57A4C: qemuMonitorEjectMedia (qemu_monitor.c:1985)
==26902== by 0x1DF1EF2D: qemuDomainChangeEjectableMedia (qemu_hotplug.c:199)
==26902== by 0x1DF90314: qemuDomainChangeDiskLive (qemu_driver.c:7985)
==26902== by 0x1DF90476: qemuDomainUpdateDeviceLive (qemu_driver.c:8030)
==26902== by 0x1DF91ED7: qemuDomainUpdateDeviceFlags (qemu_driver.c:8677)
==26902== by 0x561785F: virDomainUpdateDeviceFlags (libvirt-domain.c:8559)
==26902== by 0x134210: remoteDispatchDomainUpdateDeviceFlags (remote_dispatch.h:10966)
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Moving tasks to cgroups implies sched_setaffinity. Changing the cpus in
a cpuset implies the same for all tasks in the group.
The old code put the thread into the cpuset inherited from the machine
cgroup, which allowed it to run outside of its vcpupin for a short
while.
Signed-off-by: Henning Schild <henning.schild@siemens.com>
The machine cgroup is a superset, a parent to the emulator and vcpuX
cgroups. The parent cgroup should never have any tasks directly in it.
In fact the parent cpuset might contain way more cpus than the sum of
emulatorpin and vcpupins. So putting tasks in the superset will allow
them to run outside of <cputune>.
Signed-off-by: Henning Schild <henning.schild@siemens.com>
virCgroupNewMachine used to add the pidleader to the newly created
machine cgroup. Do not do this implicitly anymore.
Signed-off-by: Henning Schild <henning.schild@siemens.com>
When a user configures a vhost-user interface and forgets to also
configure any shared memory, the search for the root cause of the
non-operational interface might take an unpleasantly long time. Let's
enhance the user experience by emitting a warning in the logs.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1266982
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1240439
Ta-da! Now that we know how to open a macvtap device multiple
times, we can finally enable the multiqueue feature. Everything
else is already prepared (e.g. command line generation) from the
previous iteration where the feature was implemented for
TUN/TAP devices.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
For multiqueue on macvtaps we are going to need to open the device
multiple times. Currently, this is not supported. Rework the function
so that the upper layers can be reworked too.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
So, yet again, one of the integer arguments is being used as a boolean.
Since the function's argument list is already unbearably long, let's
turn those booleans into flags.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>