qemu_command.c should deal with translating our domain definition into a
QEMU command line and nothing else.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The list of supported CPU models in domain capabilities is stored in
virDomainCapsCPUModels. Let's use the same object for storing CPU models
in QEMU capabilities.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Our internal APIs mostly use virArch rather than strings. Switching
cpuGetModels to virArch will save us from unnecessary conversions in the
future.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
There are a few scenarios in which libvirtd invokes qemuProcessStop
and leaves nothing but a "shutting down" line in
/var/log/libvirt/qemu/$DOMAIN.log.
Having the shutoff reason in the domain log is very important for
understanding why a VM was shut down, as the libvirtd debug log is
seldom enabled.
Signed-off-by: Chen Hanxiao <chenhanxiao@gmail.com>
Calling virDomainGetEmulatorPinInfo on a live VM with automatic NUMA
pinning and VIR_DOMAIN_AFFECT_CONFIG would in some cases return the
automatic pinning data, which is bogus. Use the autoCpuset property only
when called on a live definition.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1365779
Calling virDomainGetVcpuPinInfo on a live VM with automatic NUMA pinning
and VIR_DOMAIN_AFFECT_CONFIG would in some cases return the automatic
pinning data, which is bogus. Use the autoCpuset property only when
called on a live definition.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1365779
Sometimes adding a separate variable to access vm->privateData is not
necessary. Add a macro that will do the typecasting rather than having
to add a temp variable to force the compiler to typecast it.
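A minimal sketch of what such a macro can look like (the actual name used
in qemu_domain.h may differ):

    #define QEMU_DOMAIN_PRIVATE(vm) \
        ((qemuDomainObjPrivatePtr) (vm)->privateData)

so that callers can write QEMU_DOMAIN_PRIVATE(vm)->job.active instead of
declaring a qemuDomainObjPrivatePtr priv just for the cast.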
Old libvirt represents
<graphics type='spice'>
<listen type='none'/>
</graphics>
as
<graphics type='spice' autoport='no'/>
In this mode, QEMU doesn't listen for SPICE connections anywhere and
clients have to use virDomainOpenGraphics* APIs to attach to the domain.
That is, the client has to run on the same host where the domain runs
and it's impossible to tell the client to reconnect to the destination
QEMU during migration (unless there is some kind of proxy on the host).
While current libvirt correctly ignores such graphics devices when
creating the graphics migration cookie, old libvirt just sends
<graphics type='spice' port='0' listen='0.0.0.0' tlsPort='-1'/>
in the cookie. After seeing this cookie, we would happily call the
client_migrate_info QMP command and wait for the SPICE_MIGRATE_COMPLETED
event, which is quite pointless since the client doesn't know where to
connect anyway. We should just ignore such cookies.
https://bugzilla.redhat.com/show_bug.cgi?id=1376083
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Checking a domain's definition or whether the domain is active before we
acquired a job is pointless since the domain might have changed in the
meantime.
Luckily libvirtd didn't crash when the API tried to talk to an inactive
domain:
debug : qemuDomainObjBeginJobInternal:2914 : Started job: modify
(async=none vm=0x7f8f340140c0 name=ble)
debug : qemuDomainObjEnterMonitorInternal:3137 : Entering monitor
(mon=(nil) vm=0x7f8f340140c0 name=ble)
warning : virObjectLock:319 : Object (nil) ((unknown)) is not a
virObjectLockable instance
debug : qemuMonitorOpenGraphics:3505 : protocol=spice fd=27
fdname=graphicsfd skipauth=1
error : qemuMonitorOpenGraphics:3508 : invalid argument: monitor must
not be NULL
debug : qemuDomainObjExitMonitorInternal:3160 : Exited monitor
(mon=(nil) vm=0x7f8f340140c0 name=ble)
debug : qemuDomainObjEndJob:3068 : Stopping job: modify (async=none
vm=0x7f8f340140c0 name=ble)
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
We can receive NULL as the sync reply in two situations. The first
is a garbage sync reply, and this situation is handled by resending
the sync message. The second covers various cases such as the guest
rebooting or the domain being destroyed, where we can give a more
meaningful error message. Actually, we already have this error message
in qemuAgentCommand, which checks for the same situation. AFAIK the
case with mon->running is just to be safe against some future(?) cases
of returning a NULL reply.
We can easily handle receiving garbage on sync. We don't have to make
the client deal with this situation; we just need to resend the sync
command, and this time garbage is no longer possible.
While waiting for a sync reply we can receive delayed replies to syncs
or commands that were sent earlier. We can safely skip them until we
receive a sync reply with the correct id. There is not much sense in
reporting this situation to the client. Actually, with a bit of "luck",
if we involve the client the play can go on forever: send sync 0,
receive sync reply -1, send sync 1, receive reply 0 ...
After a sync is sent we can receive garbage, and this is not an error.
Consider the following regular case:
1. libvirtd sends sync
2. qga sends a partial sync reply and dies
3. libvirtd sends sync
4. qga sends sync reply
5. libvirtd receives garbage
(half of first reply and second reply together)
We should handle this situation as it is recoverable; the next sync can
succeed. Let's report the reply as NULL; it will be converted to
VIR_ERR_AGENT_UNSYNCED, which signals the client to retry.
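A simplified sketch of that conversion (variable names and the message
text are illustrative, not the exact libvirt code):

    if (!reply) {
        virReportError(VIR_ERR_AGENT_UNSYNCED, "%s",
                       _("Guest agent not in sync; the command may be retried"));
        return -1;
    }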
Errors in qemuAgentIOProcessLine stop agent IO processing just
like any regular IO error; however, some of the errors this function
raises are false positives. Consider the following case, for example:
1. send sync (unsynced state)
2. receive sync reply (sync established)
3. command is sent, but a timeout occurs (unsynced state)
4. receive command reply
The last IO triggers an error because the current code ignores
only delayed syncs when unsynced.
We should not treat any delayed reply as an error in the unsynced
state. Until the client and qga are in sync, a delayed reply to any
command is possible. msg == NULL is the exact criterion
that we are not in sync.
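Roughly the shape of the relaxed check (simplified, not the exact
libvirt code):

    /* a reply arriving while no message awaits one (msg == NULL, i.e. we
     * are not in sync) is a delayed leftover and can simply be ignored */
    if (!mon->msg) {
        VIR_DEBUG("Ignoring delayed reply in unsynced state");
        return 0;
    }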
Put it into qemuDomainPrepareShmemChardev() so it can be used later.
Also don't fill in the path unless the server option is enabled.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Some checks will need to be performed for newer device types as well, so
let's not duplicate them.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Now that we have two identical implementations for getting the path for
huge page backed guest memory, let's merge them into one function.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
When trying to migrate a huge page enabled guest, I've noticed
the following crash. Apparently, if no specific hugepages are
requested:
<memoryBacking>
<hugepages/>
</memoryBacking>
and there are no hugepages configured on the destination, we try
to dereference a NULL pointer.
Program received signal SIGSEGV, Segmentation fault.
0x00007fcc907fb20e in qemuGetHugepagePath (hugepage=0x0) at qemu/qemu_conf.c:1447
1447 if (virAsprintf(&ret, "%s/libvirt/qemu", hugepage->mnt_dir) < 0)
(gdb) bt
#0 0x00007fcc907fb20e in qemuGetHugepagePath (hugepage=0x0) at qemu/qemu_conf.c:1447
#1 0x00007fcc907fb2f5 in qemuGetDefaultHugepath (hugetlbfs=0x0, nhugetlbfs=0) at qemu/qemu_conf.c:1466
#2 0x00007fcc907b4afa in qemuBuildMemoryBackendStr (size=4194304, pagesize=0, guestNode=0, userNodeset=0x0, autoNodeset=0x0, def=0x7fcc70019070, qemuCaps=0x7fcc70004000, cfg=0x7fcc5c011800, backendType=0x7fcc95087228, backendProps=0x7fcc95087218,
force=false) at qemu/qemu_command.c:3297
#3 0x00007fcc907b4f91 in qemuBuildMemoryCellBackendStr (def=0x7fcc70019070, qemuCaps=0x7fcc70004000, cfg=0x7fcc5c011800, cell=0, auto_nodeset=0x0, backendStr=0x7fcc70020360) at qemu/qemu_command.c:3413
#4 0x00007fcc907c0406 in qemuBuildNumaArgStr (cfg=0x7fcc5c011800, def=0x7fcc70019070, cmd=0x7fcc700040c0, qemuCaps=0x7fcc70004000, auto_nodeset=0x0) at qemu/qemu_command.c:7470
#5 0x00007fcc907c5fdf in qemuBuildCommandLine (driver=0x7fcc5c07b8a0, logManager=0x7fcc70003c00, def=0x7fcc70019070, monitor_chr=0x7fcc70004bb0, monitor_json=true, qemuCaps=0x7fcc70004000, migrateURI=0x7fcc700199c0 "defer", snapshot=0x0,
vmop=VIR_NETDEV_VPORT_PROFILE_OP_MIGRATE_IN_START, standalone=false, enableFips=false, nodeset=0x0, nnicindexes=0x7fcc95087498, nicindexes=0x7fcc950874a0, domainLibDir=0x7fcc700047c0 "/var/lib/libvirt/qemu/domain-1-fedora") at qemu/qemu_command.c:9547
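An illustrative sketch of the kind of check that avoids the crash (the
error message and exact placement are assumptions, not necessarily the
final fix):

    static char *
    qemuGetHugepagePath(virHugeTLBFSPtr hugepage)
    {
        char *ret = NULL;

        /* bail out gracefully instead of dereferencing a NULL entry when
         * no hugetlbfs mount point is configured */
        if (!hugepage || !hugepage->mnt_dir) {
            virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
                           _("hugetlbfs filesystem is not mounted"));
            return NULL;
        }

        if (virAsprintf(&ret, "%s/libvirt/qemu", hugepage->mnt_dir) < 0)
            return NULL;

        return ret;
    }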
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Both the qemu monitor and the agent print the same
log message on a HANGUP event, which can be confusing
when reading the libvirtd log.
This patch gives each of them a distinct log message.
Signed-off-by: Chen Hanxiao <chenhanxiao@gmail.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Most of QEMU's PCI display device models, such as:
libvirt video/model/@type QEMU -device
------------------------- ------------
cirrus cirrus-vga
vga VGA
qxl qxl-vga
virtio virtio-vga
come with a linear framebuffer (sometimes called "VGA compatibility
framebuffer"). This linear framebuffer lives in one of the PCI device's
MMIO BARs, and allows guest code (primarily: firmware drivers, and
non-accelerated OS drivers) to display graphics with direct memory access.
Due to architectural reasons on aarch64/KVM hosts, this kind of
framebuffer doesn't / can't work in
qemu-system-(arm|aarch64) -M virt
machines. Cache coherency issues guarantee a corrupted / unusable display.
The problem has been researched by several people, including kvm-arm
maintainers, and it's been decided that the best way (practically the only
way) to have boot time graphics for such guests is to consolidate on
QEMU's "virtio-gpu-pci" device.
From <https://bugzilla.redhat.com/show_bug.cgi?id=1195176>, libvirt
supports
<devices>
<video>
<model type='virtio'/>
</video>
</devices>
but libvirt unconditionally maps @type='virtio' to QEMU's "virtio-vga"
device model. (See the qemuBuildDeviceVideoStr() function and the
"qemuDeviceVideo" enum impl.)
According to the above, this is not right for the "virt" machine type; the
qemu-system-(arm|aarch64) binaries don't even recognize the "virtio-vga"
device model (justifiably). Whereas "virtio-gpu-pci", which is a pure
virtio device without a compatibility framebuffer, is available, and works
fine.
(The ArmVirtQemu ("AAVMF") platform of edk2 -- that is, the UEFI firmware
for "virt" -- supports "virtio-gpu-pci", as of upstream commit
3ef3209d3028. See
<https://tianocore.acgmultimedia.com/show_bug.cgi?id=66>.)
Override the default mapping of "virtio", from "virtio-vga" to
"virtio-gpu-pci", if qemuDomainMachineIsVirt() evaluates to true.
Cc: Andrea Bolognani <abologna@redhat.com>
Cc: Drew Jones <drjones@redhat.com>
Cc: Marc-André Lureau <marcandre.lureau@redhat.com>
Cc: Martin Kletzander <mkletzan@redhat.com>
Suggested-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1372901
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Acked-by: Martin Kletzander <mkletzan@redhat.com>
Use the state information (online, hotpluggable) provided by the monitor
code rather than trying to infer it. This fixes an issue where, on
architectures that require hotplug of multiple threads at once, the
sub-cores would get updated as offline on daemon restart, thus creating
an invalid configuration.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1375783
Return whether a vcpu entry is hotpluggable or online so that upper
layers don't have to infer the information from other data.
Advantage is that this code can be tested by unit tests.
The algorithm that matches data from query-cpus and
query-hotpluggable-cpus is quite complex. Start using descriptive
iterator names to avoid confusion.
https://bugzilla.redhat.com/show_bug.cgi?id=1372613
Apparently, some management applications use the following code
pattern when waiting for a block job to finish:
    while (1) {
        virDomainGetBlockJobInfo(dom, disk, &info, flags);
        if (info.cur == info.end)
            break;
        sleep(1);
    }
The problem with this approach is in its corner cases. In the case of
QEMU, libvirt merely passes along what has been reported on the monitor.
However, if the block job hasn't started yet, qemu reports cur ==
end == 0, which tricks mgmt apps into thinking the job is complete.
The solution is to mangle cur/end values as described here [1].
1: https://www.redhat.com/archives/libvir-list/2016-September/msg00017.html
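One possible shape of the mangling, sketched with a hypothetical helper
name (the exact rules are described in the thread above):

    static void
    qemuBlockJobInfoMangle(virDomainBlockJobInfoPtr info)
    {
        /* a job that has not started yet reports cur == end == 0; make
         * sure callers polling for cur == end do not mistake that for
         * completion */
        if (info->cur == 0 && info->end == 0)
            info->end = 1;
    }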
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Even though we merely pass to users whatever qemu provided on the
monitor, we still do some translation. For instance, we turn bytes into
mebibytes, or fix the job type if needed. However, in
the future there is more fixing to be done so this code deserves
its own function.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Name it virNumaGetHostMemoryNodeset and return only NUMA nodes which
have memory installed. This is necessary as the kernel is not very happy
to set the memory cgroup setting for nodes which do not have any memory.
This would break vcpu hotplug with the following message on such a
configuration:
Invalid value '0,8' for 'cpuset.mems': Invalid argument
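A minimal sketch of the approach using libnuma directly (names are
illustrative, this is not the libvirt implementation):

    #include <numa.h>
    #include "virbitmap.h"
    #include "internal.h"

    static virBitmapPtr
    getHostMemoryNodeset(void)
    {
        virBitmapPtr nodeset;
        int node, maxnode;

        if (numa_available() < 0)
            return NULL;

        maxnode = numa_max_node();
        if (!(nodeset = virBitmapNew(maxnode + 1)))
            return NULL;

        for (node = 0; node <= maxnode; node++) {
            long long freemem;

            /* skip nodes without any memory so that cpuset.mems never
             * receives a node the kernel would reject */
            if (numa_node_size64(node, &freemem) > 0)
                ignore_value(virBitmapSetBit(nodeset, node));
        }

        return nodeset;
    }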
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1375268
virQEMUDriverConfigNew() always initializes the bitmap in its
cgroupControllers member to -1 (i.e. all 1's).
Prior to commit a9331394, if qemu.conf had a line with
"cgroup_controllers", cgroupControllers would get reset to 0 before
going through a loop setting a bit for each named cgroup controller.
Commit a9331394 left out the "reset to 0" part, so cgroupControllers
would always be -1; if you didn't want a controller included, there
was no longer a way to make that happen.
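A simplified sketch of the intended parsing logic (the helper name and
signature are made up; the point is the "reset to 0" line):

    static int
    parseCgroupControllers(char **names, size_t nnames, int *controllers)
    {
        size_t i;

        *controllers = 0;   /* the reset that commit a9331394 dropped */

        for (i = 0; i < nnames; i++) {
            int ctl = virCgroupControllerTypeFromString(names[i]);

            if (ctl < 0)
                return -1;  /* unknown controller name in qemu.conf */

            *controllers |= (1 << ctl);
        }

        return 0;
    }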
This was discovered by users who were using qemu commandline
passthrough to use the "input-linux" method of directing
keyboard/mouse input to a virtual machine:
https://www.redhat.com/archives/vfio-users/2016-April/msg00105.html
Here's the first report I found of the problem encountered after
upgrading libvirt beyond v2.0.0:
https://www.redhat.com/archives/vfio-users/2016-August/msg00053.html
Thanks to sL1pKn07 SpinFlo <sl1pkn07@gmail.com> for bringing the
problem up in IRC, and then taking the time to do a git bisect and
find the patch that started the problem.
The previous commit:
commit 2c3223785c
Author: John Ferlan <jferlan@redhat.com>
Date: Mon Jun 13 12:30:34 2016 -0400
qemu: Add the ability to hotplug the TLS X.509 environment
added a "bool listen" parameter to some methods. This
unfortunately clashes with the listen() function, causing
compile failures on certain platforms (RHEL-6 for example).
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
When migration fails, we need to poke the QEMU monitor to check for the
reason of the failure. We did this using the query-migrate QMP command, which is
not supposed to return any meaningful result on the destination side.
Thus if the monitor was still functional when we detected the migration
failure, parsing the answer from query-migrate always failed with the
following error message:
"info migration reply was missing return status"
This irrelevant message was then used as the reason for the migration
failure replacing any message we might have had.
Let's use harmless query-status for poking the monitor to make sure we
only get an error if the monitor connection is broken.
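Roughly the shape of the poke (simplified; surrounding error handling
omitted):

    bool running;
    virDomainPausedReason reason;

    /* query-status has a well-defined reply on both ends of the
     * migration, so a failure here means the monitor connection itself
     * is broken and that error is what gets reported */
    if (qemuMonitorGetStatus(priv->mon, &running, &reason) < 0)
        goto error;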
https://bugzilla.redhat.com/show_bug.cgi?id=1374613
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Qemu always opens the tray if forced to. Skip the waiting step in such
case.
This also helps if qemu does not report the tray change event when
opening the cdrom forcibly (the documentation says that the event will
not be sent although qemu in fact does trigger it even if @force is
selected).
This is a workaround for a qemu issue where qemu does not send the tray
change event in some cases (after migration with empty closed locked
drive) and thus renders the cdrom useless from libvirt's point of view.
Partially resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1368368
When a source image that is missing is dropped due to the startup policy,
the policy needs to be cleared since it was relevant only for the given
storage source. New sources need to update it if needed.
Just like in the previous commit, teach the qemu driver to detect
whether qemu supports this configuration knob or not.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
If the incoming XML defined a path to a TLS X.509 certificate environment,
add the necessary 'tls-creds-x509' object to the VIR_DOMAIN_CHR_TYPE_TCP
character device.
Likewise, if the environment exists, the hot unplug needs adjustment as
well. Note that all the "return ret" statements were changed to "goto
cleanup" since the cfg needs to be unref'd.
Signed-off-by: John Ferlan <jferlan@redhat.com>
When building a chardev device string for tcp, add the necessary pieces
to provide the TLS X.509 path to qemu. This includes generating the
'tls-creds-x509' object and then adding the 'tls-creds' parameter to the
VIR_DOMAIN_CHR_TYPE_TCP command line.
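The resulting command line has roughly this shape (ids, host, port and
certificate directory are made-up example values):

    -object tls-creds-x509,id=objcharserial1_tls0,dir=/etc/pki/qemu,endpoint=client,verify-peer=no
    -chardev socket,id=charserial1,host=127.0.0.1,port=5555,tls-creds=objcharserial1_tls0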
Finally add the tests for the qemu command line. This test will make use
of the "new(ish)" /etc/pki/qemu setting for a TLS certificate environment
by *not* "resetting" the chardevTLSx509certdir prior to running the test.
Also use the default "verify" option (which is "no").
Signed-off-by: John Ferlan <jferlan@redhat.com>
Add a new TLS X.509 certificate type - "chardev". This will handle the
creation of a TLS certificate capability (and possibly repository) for
properly configured character device TCP backends.
Unlike vnc and spice, there is no "listen" or "passwd" associated. The
credentials will eventually be handled via a libvirt secret provided to
a specific backend.
Make use of the default verify option as well.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Rather than specify perhaps multiple TLS X.509 certificate directories,
let's create a "default" directory which can then be used if the service
(e.g. for now vnc and spice) does not supply a default directory.
Since the default for vnc and spice may have existed before without being
supplied, the code will first check whether the service-specific path
exists and, if so, set the cfg entry to that; otherwise, the default will
be set to the (now) new defaultTLSx509certdir.
Additionally add a "default_tls_x509_verify" entry which can also be used
to force the peer verification option (for vnc it's an x509verify option).
Add/alter the macro for the option being found in the config file to accept
the default value.
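For illustration, the resulting qemu.conf entries look like this (the
values shown are just examples):

    default_tls_x509_cert_dir = "/etc/pki/qemu"
    default_tls_x509_verify = 1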
Signed-off-by: John Ferlan <jferlan@redhat.com>