Useful for copying a CPU definition without model-related parts (i.e.,
without the model name, feature list, and vendor).
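A rough usage sketch (libvirt-internal API; the helper name
virCPUDefCopyWithoutModel is assumed here):

    virCPUDefPtr copy;

    /* copies arch, mode, match, topology, ... but intentionally
     * leaves out model, vendor, and the feature list */
    if (!(copy = virCPUDefCopyWithoutModel(cpu)))
        return -1;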
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
When a hypervisor is able to tell us the list of supported CPU models
and whether each CPU model can be used on the current host, we can
propagate this to domain capabilities. This is a better alternative
to calling virConnectCompareCPU for each supported CPU model.
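For comparison, this is roughly what an application has to do today for
each model it wants to probe (public API; error handling omitted):

    const char *xml = "<cpu><arch>x86_64</arch><model>Broadwell</model></cpu>";

    switch (virConnectCompareCPU(conn, xml, 0)) {
    case VIR_CPU_COMPARE_INCOMPATIBLE:
        /* model cannot be used on this host */
        break;
    case VIR_CPU_COMPARE_IDENTICAL:
    case VIR_CPU_COMPARE_SUPERSET:
        /* model is usable */
        break;
    default: /* VIR_CPU_COMPARE_ERROR */
        break;
    }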
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Listing all CPU models supported by QEMU in domain capabilities makes
little sense when libvirt will refuse any model it doesn't know about.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Some CPU drivers (such as arm) do not provide a list of CPU models
libvirt supports and just pass any CPU model from domain XML directly to
QEMU. Such drivers need to return models == NULL and success from
cpuGetModels.
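Callers thus need to handle the NULL case explicitly, roughly like this
(internal API; return value conventions approximated):

    char **models = NULL;
    int nmodels;

    if ((nmodels = cpuGetModels(arch, &models)) < 0)
        return -1;      /* real error */

    if (!models) {
        /* the driver imposes no restrictions; the model from domain
         * XML is passed through to QEMU as is */
    }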
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The x86 CPU driver translated each CPU definition from domain XML into
CPUID data and then back to CPU definition. This effectively sorted the
list of CPU features according to their CPUID values. Since this is
going to change, we need to reorder CPU features in a few test files to
make sure the generated QEMU command lines will not change.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Testing PPC64/AArch64 KVM domains on an x86_64 host only works because
we have a lot of bugs in our code. Since this series is going to fix
them, we need to make sure the host architecture matches the guest
architecture for KVM domains.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Adding x86 CPU models into a list of supported CPUs for non-x86
architectures is not a very good idea. Each architecture we test needs
to maintain its own list of supported CPU models.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
qemu_command.c should deal with translating our domain definition into a
QEMU command line and nothing else.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Changing the host architecture or CPU is not as easy as assigning a new
value to the appropriate element in virCaps, since there is a relation
between the CPU and the host architecture (we don't really want to test
anything on an AArch64 host with a core2duo CPU). This patch introduces
the qemuTestSetHostArch and qemuTestSetHostCPU helpers, which will make
sure the host architecture matches the host CPU.
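A hedged usage sketch (test-suite helpers introduced by this patch;
exact signatures assumed):

    /* switch the fake host to AArch64; a matching host CPU is set */
    qemuTestSetHostArch(caps, VIR_ARCH_AARCH64);

    /* or set a specific host CPU; the host arch is adjusted to match */
    qemuTestSetHostCPU(caps, cpu);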
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Some parts of qemuCaps depend on guest architecture, machine type, and
possibly other things that we know only once the domain XML has been
parsed. Let's move all these updates into a dedicated function.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
testCompareXMLToArgv will soon need to call a few functions which are
defined further down in the file. Let's move them up a bit.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The list of supported CPU models in domain capabilities is stored in
virDomainCapsCPUModels. Let's use the same object for storing CPU models
in QEMU capabilities.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
This patch adds a <cpu> element to the domain capabilities XML:
  <cpu>
    <mode name='host-passthrough' supported='yes'/>
    <mode name='host-model' supported='yes'/>
    <mode name='custom' supported='yes'>
      <model>Broadwell</model>
      <model>Broadwell-noTSX</model>
      ...
    </mode>
  </cpu>
Applications can use it to inspect what CPU configuration modes are
supported for a specific combination of domain type, emulator binary,
guest architecture and machine type.
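Fetching the XML shown above uses the existing public API, e.g.:

    /* returns a malloc'd XML document; the caller frees it */
    char *xml = virConnectGetDomainCapabilities(conn,
                                                "/usr/bin/qemu-system-x86_64",
                                                "x86_64", "pc", "kvm", 0);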
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Our internal APIs mostly use virArch rather than strings. Switching
cpuGetModels to virArch will save us from unnecessary conversions in the
future.
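The change boils down to (prototypes approximated):

    /* before */
    int cpuGetModels(const char *archName, char ***models);

    /* after */
    int cpuGetModels(virArch arch, char ***models);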
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
By default, virt-manager (and likely other libvirt-based apps) sets
the VIR_MIGRATE_PERSIST_DEST flag when invoking the migrate API, which
fails in a Xen setup since the libxl driver does not support the flag.
Persisting a domain is a trivial task in the grand scheme of migration,
so be nice to libvirt apps and add support for VIR_MIGRATE_PERSIST_DEST
in the libxl driver.
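For reference, this is the kind of call that used to fail (public API):

    unsigned long flags = VIR_MIGRATE_LIVE | VIR_MIGRATE_PERSIST_DEST;

    /* with the libxl driver this failed due to the unsupported flag */
    virDomainPtr ddom = virDomainMigrate(dom, dconn, flags, NULL, NULL, 0);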
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
There are a few scenarios in which libvirtd invokes qemuProcessStop and
leaves nothing but a bare "shutting down" message in
/var/log/libvirt/qemu/$DOMAIN.log. The shutoff reason currently shows up
only in the libvirtd debug log, which we seldom enable; recording it in
the domain log as well is very helpful for finding out why a VM was shut
down.
Signed-off-by: Chen Hanxiao <chenhanxiao@gmail.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1322717
During offline migration no storage is copied: neither disks, nor the
NVRAM file, nor anything else. We rely on QEMU for that, and since the
domain is not running, there is nobody to copy the data for us.
We should document this to avoid confusing users.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Calling virDomainGetEmulatorPinInfo on a live VM with automatic NUMA
pinning and VIR_DOMAIN_AFFECT_CONFIG would in some cases return the
automatic pinning data, which is bogus. Use the autoCpuset property only
when called on the live definition.
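The affected call looks roughly like this (public API; cpumap sizing
abbreviated):

    unsigned char cpumap[VIR_CPU_MAPLEN(128)];

    /* querying the persistent config of a live VM hit the buggy path */
    virDomainGetEmulatorPinInfo(dom, cpumap, sizeof(cpumap),
                                VIR_DOMAIN_AFFECT_CONFIG);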
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1365779
Calling virDomainGetVcpuPinInfo on a live VM with automatic NUMA pinning
and VIR_DOMAIN_AFFECT_CONFIG would in some cases return the automatic
pinning data, which is bogus. Use the autoCpuset property only when
called on the live definition.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1365779
Sometimes adding a separate variable just to access vm->privateData is
not necessary. Add a macro that does the typecasting rather than having
to add a temporary variable to force the compiler to typecast it.
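A sketch of the idea (the macro name QEMU_DOMAIN_PRIVATE is assumed
here):

    #define QEMU_DOMAIN_PRIVATE(vm) \
        ((qemuDomainObjPrivatePtr) (vm)->privateData)

    /* before */
    qemuDomainObjPrivatePtr priv = vm->privateData;
    if (priv->agent)
        ...

    /* after */
    if (QEMU_DOMAIN_PRIVATE(vm)->agent)
        ...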
Old libvirt represents
  <graphics type='spice'>
    <listen type='none'/>
  </graphics>
as
  <graphics type='spice' autoport='no'/>
In this mode, QEMU doesn't listen for SPICE connections anywhere and
clients have to use the virDomainOpenGraphics* APIs to attach to the
domain. That is, the client has to run on the same host where the domain
runs, and it's impossible to tell the client to reconnect to the
destination QEMU during migration (unless there is some kind of proxy on
the host).
While current libvirt correctly ignores such graphics devices when
creating the graphics migration cookie, old libvirt just sends
  <graphics type='spice' port='0' listen='0.0.0.0' tlsPort='-1'/>
in the cookie. After seeing this cookie, we would happily call the
client_migrate_info QMP command and wait for the SPICE_MIGRATE_COMPLETED
event, which is quite pointless since the client doesn't know where to
connect anyway. We should just ignore such cookies.
https://bugzilla.redhat.com/show_bug.cgi?id=1376083
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Checking a domain's definition or whether it is active before we acquire
a job is pointless, since the domain might have changed in the meantime.
Luckily, libvirtd didn't crash when the API tried to talk to an inactive
domain:
debug : qemuDomainObjBeginJobInternal:2914 : Started job: modify
(async=none vm=0x7f8f340140c0 name=ble)
debug : qemuDomainObjEnterMonitorInternal:3137 : Entering monitor
(mon=(nil) vm=0x7f8f340140c0 name=ble)
warning : virObjectLock:319 : Object (nil) ((unknown)) is not a
virObjectLockable instance
debug : qemuMonitorOpenGraphics:3505 : protocol=spice fd=27
fdname=graphicsfd skipauth=1
error : qemuMonitorOpenGraphics:3508 : invalid argument: monitor must
not be NULL
debug : qemuDomainObjExitMonitorInternal:3160 : Exited monitor
(mon=(nil) vm=0x7f8f340140c0 name=ble)
debug : qemuDomainObjEndJob:3068 : Stopping job: modify (async=none
vm=0x7f8f340140c0 name=ble)
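The fix is simply to acquire the job first and only check the domain
state afterwards, e.g. (internal APIs, simplified sketch):

    if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0)
        goto cleanup;

    if (!virDomainObjIsActive(vm)) {
        virReportError(VIR_ERR_OPERATION_INVALID, "%s",
                       _("domain is not running"));
        goto endjob;
    }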
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
We can receive NULL as the sync reply in two situations. The first is a
garbage sync reply, and that situation is handled by resending the sync
message. The second covers the various cases of rebooting the guest,
destroying the domain etc., where we can give a more meaningful error
message. Actually, we already have this error message in
qemuAgentCommand, which checks for the same situation. AFAIK the check
for mon->running is just there to be safe against some future(?) cases
of returning a NULL reply.
We can easily handle receiving garbage during sync; we don't have to
make the client deal with this situation. We just need to resend the
sync command, and this time garbage is no longer possible.
While we wait for the sync reply we can receive delayed replies to syncs
or commands that were sent earlier. We can safely skip them until we
receive a sync reply with the correct id. There is not much sense in
reporting this situation to the client. Actually, with a bit of "luck",
if we involve the client in this, the play can go on forever: send
sync 0, receive sync reply -1, send sync 1, receive reply 0 ...
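In pseudo-C the skip amounts to something like this (simplified, not the
actual code):

    /* got a reply while waiting for the sync to complete */
    if (reply_id != mon->sync_id)
        return 0;   /* delayed reply to an earlier sync: ignore it */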
After a sync is sent we can receive garbage, and this is not an error.
Consider the following regular case:
1. libvirtd sends sync
2. qga sends a partial sync reply and dies
3. libvirtd sends sync
4. qga sends a sync reply
5. libvirtd receives garbage
   (half of the first reply and the second reply together)
We should handle this situation as it is recoverable: the next sync can
succeed. Let's report the reply as NULL; it will be converted to
VIR_ERR_AGENT_UNSYNCED, which signals the client to retry.
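From the client's point of view the retry could look like this (public
API; virDomainFSFreeze is just one example of an agent-based call):

    if (virDomainFSFreeze(dom, NULL, 0, 0) < 0) {
        virErrorPtr err = virGetLastError();

        if (err && err->code == VIR_ERR_AGENT_UNSYNCED) {
            /* libvirtd and qga lost sync; the call is safe to retry */
        }
    }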
Errors in qemuAgentIOProcessLine stop agent IO processing just like any
regular IO error; however, some of the errors this function currently
spawns are false positives. Consider the following case, for example:
1. send sync (unsynced state)
2. receive sync reply (sync established)
3. send command, but a timeout occurs (unsynced state)
4. receive command reply
The last IO triggers an error because the current code ignores only
delayed syncs while unsynced. We should not treat any delayed reply as
an error in the unsynced state: until the client and qga are in sync, a
delayed reply to any command is possible. msg == NULL is the exact
criterion that we are not in sync.
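In other words, the intended handling is roughly (simplified sketch of
the check in qemuAgentIOProcessLine, not the actual code):

    if (!msg) {
        /* not in sync: a delayed reply to any earlier command may
         * still arrive; silently ignore it instead of raising an
         * error that would kill the whole agent connection */
        return 0;
    }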