libvirt/tests/qemuxml2argvdata/os-firmware-efi-secboot.x86_64-latest.args

LC_ALL=C \
PATH=/bin \
HOME=/tmp/lib/domain--1-fedora \
USER=test \
LOGNAME=test \
XDG_DATA_HOME=/tmp/lib/domain--1-fedora/.local/share \
XDG_CACHE_HOME=/tmp/lib/domain--1-fedora/.cache \
XDG_CONFIG_HOME=/tmp/lib/domain--1-fedora/.config \
QEMU_AUDIO_DRV=none \
/usr/bin/qemu-system-x86_64 \
-name guest=fedora,debug-threads=on \
-S \
-object secret,id=masterKey0,format=raw,\
file=/tmp/lib/domain--1-fedora/master-key.aes \
-blockdev '{"driver":"file","filename":"/usr/share/OVMF/OVMF_CODE.secboot.fd",\
"node-name":"libvirt-pflash0-storage","auto-read-only":true,\
"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-pflash0-format","read-only":true,\
"driver":"raw","file":"libvirt-pflash0-storage"}' \
-blockdev '{"driver":"file",\
"filename":"/var/lib/libvirt/qemu/nvram/fedora_VARS.fd",\
"node-name":"libvirt-pflash1-storage","auto-read-only":true,\
"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-pflash1-format","read-only":false,\
"driver":"raw","file":"libvirt-pflash1-storage"}' \
-machine pc-q35-4.0,accel=kvm,usb=off,smm=on,dump-guest-core=off,\
pflash0=libvirt-pflash0-format,pflash1=libvirt-pflash1-format,\
memory-backend=pc.ram \
-cpu qemu64 \
-global driver=cfi.pflash01,property=secure,value=on \
-m 8 \
-object memory-backend-ram,id=pc.ram,size=8388608 \
-overcommit mem-lock=off \
-smp 1,sockets=1,cores=1,threads=1 \
-uuid 63840878-0deb-4095-97e6-fc444d9bc9fa \
-display none \
-no-user-config \
-nodefaults \
-chardev socket,id=charmonitor,fd=1729,server,nowait \
-mon chardev=charmonitor,id=monitor,mode=control \
-rtc base=utc \
-no-shutdown \
-global ICH9-LPC.disable_s3=0 \
-global ICH9-LPC.disable_s4=1 \
-boot menu=on,strict=on \
-device i82801b11-bridge,id=pci.1,bus=pcie.0,addr=0x1e \
-device pci-bridge,chassis_nr=2,id=pci.2,bus=pci.1,addr=0x0 \
-device ioh3420,port=0x8,chassis=3,id=pci.3,bus=pcie.0,addr=0x1 \
-device ich9-usb-ehci1,id=usb,bus=pcie.0,addr=0x1d.0x7 \
-device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pcie.0,multifunction=on,\
addr=0x1d \
-device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pcie.0,addr=0x1d.0x1 \
-device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pcie.0,addr=0x1d.0x2 \
-device virtio-balloon-pci,id=balloon0,bus=pci.2,addr=0x1 \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,\
resourcecontrol=deny \
-msg timestamp=on