Compare commits

...

32 Commits

Author SHA1 Message Date
Lukas Greve
8266670f1d move scripts 2025-11-02 11:22:55 +01:00
Lukas Greve
7b67ff510b add aider 2025-11-02 11:22:23 +01:00
Lukas Greve
4aa7dafe26 make it a markdown 2025-11-02 11:22:13 +01:00
Lukas Greve
ebaf28040c Store VMs images in a more persistent location 2025-11-01 10:55:24 +01:00
Lukas Greve
d6e11a3e63 add logic for updating the image location for Fedora Rawhide. Does not appear to be working 2025-10-26 12:13:29 +01:00
Lukas Greve
f27930a294 Add logic to fetch latest Fedora Cloud Rawhide image 2025-10-26 12:02:16 +01:00
Lukas Greve
1b757a98eb Add support for Fedora Cloud Rawhide 2025-10-25 18:59:53 +02:00
Lukas Greve
5f5119db1d add support for CentOS Stream 10 2025-10-25 18:59:40 +02:00
Lukas Greve
f541ae77ce remove whitespace 2025-10-25 18:59:19 +02:00
Lukas Greve
cdfae5661e add CentOS Stream 10 2025-10-25 13:49:11 +02:00
Lukas Greve
ab633d601d add centos stream and refactor script 2025-10-25 13:48:53 +02:00
Lukas Greve
4272105dd2 new whiteline due to the update_image_locations script 2025-10-22 22:30:09 +02:00
Lukas Greve
15dff2b43e updated version of the script, fix bug that was deleted bracket 2025-10-22 22:29:20 +02:00
Lukas Greve
5985b0f353 remove whitespace 2025-10-22 22:15:42 +02:00
Lukas Greve
a5ae234469 add script to update image location 2025-10-22 22:13:44 +02:00
Lukas Greve
94f2fd43ed add script to download OS images 2025-10-22 22:13:29 +02:00
Lukas Greve
ca2e84496a Simplify the logic so that when uefi_firmware is not set to true (or the line does not appear in main.tf), will default to BIOS 2025-10-20 20:27:35 +02:00
Lukas Greve
87dc196f77 rename folder 2025-10-20 17:25:35 +02:00
Lukas Greve
8271e05336 add logic to automatically detect firmware irrespective of the Linux distribution 2025-10-20 11:28:28 +02:00
Lukas Greve
7317e390c9 update main.tf to match simpler UEFI firmware logic 2025-10-20 11:27:00 +02:00
Lukas Greve
92404ccc34 aider not used so can be ignored 2025-10-20 11:26:35 +02:00
Lukas Greve
3c8120a733 update README with limitations section 2025-10-20 10:29:26 +02:00
Lukas Greve
b2b4eb9d01 add support for Rocky Linux 10 2025-10-19 20:31:29 +02:00
Lukas Greve
b2f51f6d63 add ability to remove ssh keys 2025-10-19 20:27:50 +02:00
Lukas Greve
bd10329712 add support for OpenSUSE Tumbleweed 2025-10-19 20:13:33 +02:00
Lukas Greve
79f8d5f5a5 add support for debian 13 2025-10-19 20:13:14 +02:00
Lukas Greve
f146540ede add location of uefi firmware relative to Fedora, which is now the default 2025-10-18 20:13:19 +02:00
Lukas Greve
369ce1b88d comment fix 2025-10-18 20:12:58 +02:00
Lukas Greve
1827b122be revert ot remote image location 2025-10-18 13:27:21 +02:00
Lukas Greve
6076b096f1 update the README to reflect recent changes
add script to automatically add SSH key pair to main.tf files, for deployments that do require it
2025-10-18 13:19:22 +02:00
Lukas Greve
91e23f0765 move up files to one level and erase default public key 2025-10-18 13:18:32 +02:00
Lukas Greve
f5e85371e4 Move simler example to a new repository 2025-10-18 12:14:55 +02:00
30 changed files with 875 additions and 487 deletions

5
.gitignore vendored

@@ -10,7 +10,4 @@ terraform.tfvars.example
# Terraform plan and output files
*.tfplan
*.tfout
# Aider files
*.aider*
.aider*


336
README.md

@@ -11,16 +11,16 @@ The folder *multiple* contains two subfolders, one with shared modules and the o
The idea is to reuse modules across multiple virtual machines and operating systems.
```
./multiple:
.:
environments shared_modules
./multiple/environments:
./environments:
cloud_init.yaml ubuntu-cloud-server-2404-bios
./multiple/environments/ubuntu-cloud-server-2404-bios:
./environments/ubuntu-cloud-server-2404-bios:
ubuntu-cloud-server-2404-bios.tf
./multiple/shared_modules:
./shared_modules:
cloud-init.tf domain.tf network.tf outputs.tf pool.tf provider.tf variables.tf volume.tf
```
@@ -29,320 +29,96 @@ cloud-init.tf domain.tf network.tf outputs.tf pool.tf provider.tf variable
- [QEMU](https://www.qemu.org/)
- [libvirt](https://libvirt.org/)
- [Terraform provider for Libvirt](https://github.com/dmacvicar/terraform-provider-libvirt)
- An SSH key pair to connect to machines that are deployed using cloud-init. See instructions below.
## Assumptions
Your Linux x86_64-based machine has at least 4 GB of available memory and 2 CPUs.
- Your Linux x86_64-based machine has at least 4 GB of available memory and 2 CPUs
## Limitations
- Only one deployment is supported at a time, as some resources are shared; e.g. Ubuntu cannot be deployed alongside Debian.
## How to use it
- Clone this repository
- Go to folder *example*
- Execute the following commands, which will download and install the required Terraform provider if not already present
- Run the following to generate an SSH key pair
```
$ terraform init
Initializing the backend...
Initializing provider plugins...
- Reusing previous version of dmacvicar/libvirt from the dependency lock file
- Using previously-installed dmacvicar/libvirt v0.8.3
Terraform has been successfully initialized!
[...]
$ ssh-keygen -t rsa -b 4096 -f ~/.ssh/terraform_key -C "terraform-deployment"
```
- The following command will plan the deployment, describing actions that will be taken when applied
- Make the script executable
```
$ chmod +x update_ssh_keys.sh
```
- Run the script (it uses terraform_key by default); it will update every `main.tf` file so that it uses the previously generated key:
```
$ ./update_ssh_keys.sh
```
> Alternatively, you can use your own public key and update it manually in the `main.tf` deployment file
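For reference, a manually edited deployment entry would look roughly like the sketch below; the key value is a placeholder, not a real key:
```
module "shared_modules" {
  source           = "../../shared_modules"
  vm_name          = "u24-bios"
  image_location   = "https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img"
  # Paste the contents of your public key (e.g. ~/.ssh/terraform_key.pub) between the quotes
  ssh_key          = "ssh-rsa AAAA...placeholder... terraform-deployment"
  enable_cloudinit = true
}
```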
- Navigate to one of the available deployments
```
$ cd environments/ubuntu-cloud-server-2404-bios/
```
- Initialize your terraform environment
```
$ terraform init
```
- Plan the deployment
```
$ terraform plan
[...]
Terraform will perform the following actions
# A cloud-init ISO disk is created, which provides pre-configured settings and scripts that are applied to a cloud-native disk image during its initial boot. Without it, no user would be created and it would not be possible to log into the virtual machine
+ resource "libvirt_cloudinit_disk" "commoninit" {
+ name = "commoninit.iso"
+ pool = "ubuntu-bios"
[...]
}
# The libvirt domain or virtual machine will be created
+ resource "libvirt_domain" "domain" {
+ cloudinit = (known after apply)
[...]
}
# Here, a libvirt pool to store the virtual machine disk image will be created. It should be possible to use the default one
+ resource "libvirt_pool" "ubuntu-bios" {
[...]
+ name = "ubuntu-bios"
+ type = "dir"
+ target {
+ path = "/tmp/ubuntu-bios"
}
}
# A qcow2 disk volume will be created and stored in the previously created pool, based on an Ubuntu Noble (24.04) cloud image hosted upstream
+ resource "libvirt_volume" "ubuntu-qcow2" {
+ format = "qcow2"
[...]
# The plan summarizes the actions to be taken, which in this case is creating resources
Plan: 4 to add, 0 to change, 0 to destroy.
```
- The last command will carry out the plan
- Deploy
```
$ terraform apply
[...]
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
# The actions are carried out
libvirt_pool.ubuntu-bios: Creating...
libvirt_pool.ubuntu-bios: Creation complete after 0s
[...]
Apply complete! Resources: 4 added, 0 changed, 0 destroyed.
$ terraform apply
```
- Identify the created machine
- Identify the name of the machine, which requires elevated privileges
```
$ sudo virsh list --all
Id Name State
---------------------------------------------
10 ubuntu-cloud-server-2404-0 running
# virsh list --all
Id Name State
--------------------------------------------
2 u24-bios-0 running
```
- Determine its IP address
- Fetch IP address
```
$ sudo virsh domifaddr ubuntu-cloud-server-2404-0
# virsh domifaddr u24-bios-0
```
- Connect to the machine with the user `groot`
```
$ ssh groot@10.17.3.107
```
```
Name MAC address Protocol Address
----------------------------------------------------------------
vnet3 52:54:00:e2:51:c0 ipv4 192.168.122.24/24
groot@ubuntu:~$
```
- Connect to the machine
- Logout
```
$ ssh root@192.168.122.24
[...]
# Use the password defined in the cloud-init.cfg file
root@192.168.122.24's password:
Welcome to Ubuntu 24.04.3 LTS (GNU/Linux 6.8.0-71-generic x86_64)
[...]
System information as of Tue Aug 26 10:40:49 UTC 2025
System load: 0.0 Processes: 113
Usage of /: 67.4% of 2.35GB Users logged in: 0
Memory usage: 5% IPv4 address for ens3: 192.168.122.24
Swap usage: 0%
$ exit
```
- Exit the virtual machine
```
root@ubuntu$ exit
```
- To destroy the virtual machine, execute the following command
- Destroy the machine
```
$ terraform destroy
[...]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
- destroy
Terraform will perform the following actions:
# libvirt_cloudinit_disk.commoninit will be destroyed
- resource "libvirt_cloudinit_disk" "commoninit" {
[...]
# libvirt_domain.domain[0] will be destroyed
- resource "libvirt_domain" "domain" {
- arch = "x86_64" -> null
[...]
}
}
# libvirt_pool.ubuntu2 will be destroyed
- resource "libvirt_pool" "ubuntu2" {
- allocation = 798310400 -> null
- available = 16019255296 -> null
[...]
}
}
# libvirt_volume.ubuntu-qcow2 will be destroyed
- resource "libvirt_volume" "ubuntu-qcow2" {
[...]
}
Plan: 0 to add, 0 to change, 4 to destroy.
Do you really want to destroy all resources?
Terraform will destroy all your managed infrastructure, as shown above.
There is no undo. Only 'yes' will be accepted to confirm.
Enter a value: yes
libvirt_domain.domain[0]: Destroying... [id=611d5ede-e4b4-4ca5-ad83-83030942a6b5]
libvirt_domain.domain[0]: Destruction complete after 0s
libvirt_cloudinit_disk.commoninit: Destroying... [id=/tmp/cluster_storage2/commoninit.iso;5f4e08ef-ad51-484f-a9f2-c926f582974a]
libvirt_volume.ubuntu-qcow2: Destroying... [id=/tmp/cluster_storage2/ubuntu-qcow2]
libvirt_cloudinit_disk.commoninit: Destruction complete after 0s
libvirt_volume.ubuntu-qcow2: Destruction complete after 0s
libvirt_pool.ubuntu2: Destroying... [id=dbd62f8b-5d09-4e96-87e2-88e95c582896]
libvirt_pool.ubuntu2: Destruction complete after 0s
Destroy complete! Resources: 4 destroyed.
```
## Explanations
Let's take a look inside the *ubuntu-cloud-server-2404-bios* folder, which contains two files, *ubuntu-cloud-server-2404-bios.tf* and *cloud_init.cfg*
The first file *ubuntu-cloud-server-2404-bios.tf* contains the main configuration for the Terraform deployment.
- It starts by defining the required Terraform version and provider
```
terraform {
required_version = ">= 0.13"
required_providers {
libvirt = {
source = "dmacvicar/libvirt"
version = "0.8.3"
}
}
}
```
- The specific provider is defined here
```
provider "libvirt" {
uri = "qemu:///system"
}
```
> The [connection URI](https://libvirt.org/uri.html#qemu-qemu-and-kvm-uris) of the libvirt instance can be defined. One could, for instance, specify a libvirt instance that is hosted remotely
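> As an illustration (user and host below are placeholders), a provider block pointing at a remote libvirt daemon over SSH could look like this:
```
provider "libvirt" {
  # Connect to the system libvirt daemon on a remote host via SSH
  uri = "qemu+ssh://user@remote-host/system"
}
```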
- A libvirt pool, to store the virtual machine image, is created:
```
resource "libvirt_pool" "ubuntu-bios" {
name = "ubuntu-bios"
type = "dir"
target {
path = "/tmp/ubuntu-bios"
}
}
```
- The cloud-init user data will be fetched from a specific file whose path has to be declared:
```
data "template_file" "user_data" {
template = file("${path.module}/cloud_init.cfg")
}
```
- The ISO cloud-init disk will be created:
```
resource "libvirt_cloudinit_disk" "commoninit" {
name = "commoninit.iso"
user_data = data.template_file.user_data.rendered
pool = libvirt_pool.ubuntu-bios.name
}
```
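- The domain shown next also references a disk volume, `libvirt_volume.ubuntu-qcow2`, which is not reproduced in this walkthrough. A minimal sketch of such a volume, reusing the Ubuntu Noble cloud image URL from the examples above, would be:
```
resource "libvirt_volume" "ubuntu-qcow2" {
  name   = "ubuntu-qcow2"
  pool   = libvirt_pool.ubuntu-bios.name
  source = "https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img"
  format = "qcow2"
}
```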
- Perhaps most importantly, the domain will be created:
> Values can be adjusted, such as memory or vCPU counts. In the examples, several virtio-based devices are created, such as a virtio GPU.
```
resource "libvirt_domain" "domain" {
count = 1
name = "ubuntu-cloud-server-2404-${count.index}"
memory = "4092"
vcpu = 2
cloudinit = libvirt_cloudinit_disk.commoninit.id
cpu {
mode = "host-model"
}
disk {
volume_id = libvirt_volume.ubuntu-qcow2.id
}
console {
type = "pty"
target_port = "0"
target_type = "virtio"
}
video {
type = "virtio"
}
tpm {
backend_type = "emulator"
backend_version = "2.0"
}
network_interface {
network_name = "default"
}
}
```
## Resources


@@ -0,0 +1,22 @@
terraform {
required_version = ">= 0.13"
required_providers {
libvirt = {
source = "dmacvicar/libvirt"
version = "0.8.3"
}
}
}
provider "libvirt" {
uri = "qemu:///system"
}
module "shared_modules" {
source = "../../shared_modules"
vm_name = "cent10-bios"
image_location = "https://cloud.centos.org/centos/10-stream/x86_64/images/CentOS-Stream-GenericCloud-x86_64-10-latest.x86_64.qcow2"
ssh_key = "" # please provide a SSH public key
enable_cloudinit = true
}


@@ -0,0 +1,22 @@
terraform {
required_version = ">= 0.13"
required_providers {
libvirt = {
source = "dmacvicar/libvirt"
version = "0.8.3"
}
}
}
provider "libvirt" {
uri = "qemu:///system"
}
module "shared_modules" {
source = "../../shared_modules"
vm_name = "deb-13-bios"
image_location = "https://cloud.debian.org/images/cloud/trixie/latest/debian-13-genericcloud-amd64.raw"
ssh_key = "" # please provide a SSH public key
enable_cloudinit = true
}


@@ -0,0 +1,22 @@
terraform {
required_version = ">= 0.13"
required_providers {
libvirt = {
source = "dmacvicar/libvirt"
version = "0.8.3"
}
}
}
provider "libvirt" {
uri = "qemu:///system"
}
module "shared_modules" {
source = "../../shared_modules"
vm_name = "fraw-bios"
image_location = "file:///var/lib/libvirt/images/Fedora-Cloud-Base-Generic-Rawhide-20251024.n.0.x86_64.qcow2"
ssh_key = "" # please provide a SSH public key
enable_cloudinit = true
}


@@ -0,0 +1,22 @@
terraform {
required_version = ">= 0.13"
required_providers {
libvirt = {
source = "dmacvicar/libvirt"
version = "0.8.3"
}
}
}
provider "libvirt" {
uri = "qemu:///system"
}
module "shared_modules" {
source = "../../shared_modules"
vm_name = "f42-bios"
image_location = "https://download.fedoraproject.org/pub/fedora/linux/releases/42/Cloud/x86_64/images/Fedora-Cloud-Base-Generic-42-1.1.x86_64.qcow2"
ssh_key = "" # please provide a SSH public key
enable_cloudinit = true
}


@@ -0,0 +1,23 @@
terraform {
required_version = ">= 0.13"
required_providers {
libvirt = {
source = "dmacvicar/libvirt"
version = "0.8.3"
}
}
}
provider "libvirt" {
uri = "qemu:///system"
}
module "shared_modules" {
source = "../../shared_modules"
vm_name = "os-tw-uefi"
image_location = "https://download.opensuse.org/tumbleweed/appliances/openSUSE-Tumbleweed-Minimal-VM.x86_64-Cloud.qcow2"
ssh_key = "" # please provide a SSH public key
enable_cloudinit = true
}


@@ -0,0 +1,21 @@
terraform {
required_version = ">= 0.13"
required_providers {
libvirt = {
source = "dmacvicar/libvirt"
version = "0.8.3"
}
}
}
provider "libvirt" {
uri = "qemu:///system"
}
module "shared_modules" {
source = "../../shared_modules"
vm_name = "phyllome-42-uefi"
image_location = "/var/lib/libvirt/images/virtual-desktop-hypervisor.img"
uefi_firmware = true
}


@@ -0,0 +1,22 @@
terraform {
required_version = ">= 0.13"
required_providers {
libvirt = {
source = "dmacvicar/libvirt"
version = "0.8.3"
}
}
}
provider "libvirt" {
uri = "qemu:///system"
}
module "shared_modules" {
source = "../../shared_modules"
vm_name = "rl-bios"
image_location = "https://dl.rockylinux.org/pub/rocky/10/images/x86_64/Rocky-10-GenericCloud-Base.latest.x86_64.qcow2"
ssh_key = "" # please provide a SSH public key
enable_cloudinit = true
}


@@ -0,0 +1,22 @@
terraform {
required_version = ">= 0.13"
required_providers {
libvirt = {
source = "dmacvicar/libvirt"
version = "0.8.3"
}
}
}
provider "libvirt" {
uri = "qemu:///system"
}
module "shared_modules" {
source = "../../shared_modules"
vm_name = "u24-bios"
image_location = "https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img"
ssh_key = "" # please provide a SSH public key
enable_cloudinit = true
}


@@ -0,0 +1,23 @@
terraform {
required_version = ">= 0.13"
required_providers {
libvirt = {
source = "dmacvicar/libvirt"
version = "0.8.3"
}
}
}
provider "libvirt" {
uri = "qemu:///system"
}
module "shared_modules" {
source = "../../shared_modules"
vm_name = "u24-uefi"
image_location = "https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img"
ssh_key = "" # please provide a SSH public key
enable_cloudinit = true
uefi_firmware = true
}


@@ -1,18 +0,0 @@
#cloud-config
# vim: syntax=yaml
# examples:
# https://cloudinit.readthedocs.io/en/latest/topics/examples.html
---
ssh_pwauth: true
disable_root: false
chpasswd:
list: |
root:password
expire: false
users:
- name: ubuntu
sudo: ALL=(ALL) NOPASSWD:ALL
groups: users, admin
home: /home/ubuntu
shell: /bin/bash
lock_passwd: false


@@ -1,71 +0,0 @@
terraform {
required_version = ">= 0.13"
required_providers {
libvirt = {
source = "dmacvicar/libvirt"
version = "0.8.3"
}
}
}
provider "libvirt" {
uri = "qemu:///system"
}
resource "libvirt_pool" "ubuntu-bios" {
name = "ubuntu-bios"
type = "dir"
target {
path = "/tmp/ubuntu-bios"
}
}
resource "libvirt_volume" "ubuntu-bios" {
name = "ubuntu-bios-${count.index}"
pool = libvirt_pool.ubuntu-bios.name
source = "/var/lib/libvirt/images/noble-server-cloudimg-amd64.img"
format = "qcow2"
count = 1
}
resource "libvirt_cloudinit_disk" "commoninit" {
name = "commoninit.iso"
user_data = templatefile("${path.module}/cloud_init.yaml", {})
}
resource "libvirt_domain" "domain" {
count = 1
name = "ubuntu-cloud-server-2404-${count.index}"
memory = "4092"
vcpu = 2
cloudinit = libvirt_cloudinit_disk.commoninit.id
cpu {
mode = "host-model"
}
disk {
volume_id = element(libvirt_volume.ubuntu-bios.*.id, count.index)
}
console {
type = "pty"
target_port = "0"
target_type = "virtio"
}
video {
type = "virtio"
}
tpm {
backend_type = "emulator"
backend_version = "2.0"
}
network_interface {
network_name = "default"
}
}


@@ -1,22 +0,0 @@
terraform {
required_version = ">= 0.13"
required_providers {
libvirt = {
source = "dmacvicar/libvirt"
version = "0.8.3"
}
}
}
provider "libvirt" {
uri = "qemu:///system"
}
module "shared_modules" {
source = "../../shared_modules"
vm_name = "f42-bios"
image_location = "https://download.fedoraproject.org/pub/fedora/linux/releases/42/Cloud/x86_64/images/Fedora-Cloud-Base-Generic-42-1.1.x86_64.qcow2"
ssh_key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDMSuVlvOsMqx9qOrKKB1295FjCf2QhHfR1qola9brGkUcFL9dAztG2qdQnpiuPQ4OJpkedrO3C/ixEw1MLTL8l12SvYy/Q9QFguwylp35Nbw1p8h7jrX1FcNLRYltxkMgVhCs1InT5m0lf56bu1h7JfsMs7Ovsy3lU5OdK4h2MysTSKOLctsE4jDJ+XbJYQzj4rbfB/U7/9ple366cGl6xlaHxVfI4BUFWUOiVU4HWvZjrOM5fqPt+AUFRx1l2D7hLUZgOdVQwgO8GFn0sCyCIw0NCXbDn/H05pvWtTUPnyhj5TiseF8qW1byrrT5G8saxwvx8nbIK2tpPfKFdIiL7aj9bYQdltn1knJtvk3hpTPy4QvAbaoGfnfrPAsyU1A/CTw9SD/idvDT2wt1hVsm8EsnpovF7WT5z22fcgoFLDo+QCQrp7t1Wx0/Djay2nThi3FO3N051y5fQWoKOvTsm+rRhrzpDoc+Wtrtss3ua54qnQxHRx3YC0M5Xl9DINkwrcunbZBhozsDG2DzX9qcyzJsSfm9Zt5yM2lpcq+dGPRO1wedw4ogoOpobRr9Cja9W/lJvxmjgIiHz2HbSFPtk/VGjL6M7aQor/GDNN3ugSsfUoTTmNaS9+lWeg+tQWcFUPhYQtQB4/gHQ2u7+mQ0H3hVybsIKIh5XBpAdHQ7pww=="
enable_cloudinit = true
}


@@ -1,27 +0,0 @@
terraform {
required_version = ">= 0.13"
required_providers {
libvirt = {
source = "dmacvicar/libvirt"
version = "0.8.3"
}
}
}
provider "libvirt" {
uri = "qemu:///system"
}
module "shared_modules" {
source = "../../shared_modules"
vm_name = "phyllome-42-uefi"
image_location = "/var/lib/libvirt/images/virtual-desktop-hypervisor.img"
enable_cloudinit = false
# ---- OPTIONAL UEFI SETTINGS ----------------------------------------------
uefi_firmware = "/usr/share/edk2/x64/OVMF_CODE.4m.fd"
uefi_nvram_template = "/usr/share/edk2/x64/OVMF_VARS.4m.fd"
uefi_nvram_file_suffix = "-uefi"
# ----------------------------------------------------------------
}


@@ -1,22 +0,0 @@
terraform {
required_version = ">= 0.13"
required_providers {
libvirt = {
source = "dmacvicar/libvirt"
version = "0.8.3"
}
}
}
provider "libvirt" {
uri = "qemu:///system"
}
module "shared_modules" {
source = "../../shared_modules"
vm_name = "u24-bios"
image_location = "/var/lib/libvirt/images/noble-server-cloudimg-amd64.img"
ssh_key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDMSuVlvOsMqx9qOrKKB1295FjCf2QhHfR1qola9brGkUcFL9dAztG2qdQnpiuPQ4OJpkedrO3C/ixEw1MLTL8l12SvYy/Q9QFguwylp35Nbw1p8h7jrX1FcNLRYltxkMgVhCs1InT5m0lf56bu1h7JfsMs7Ovsy3lU5OdK4h2MysTSKOLctsE4jDJ+XbJYQzj4rbfB/U7/9ple366cGl6xlaHxVfI4BUFWUOiVU4HWvZjrOM5fqPt+AUFRx1l2D7hLUZgOdVQwgO8GFn0sCyCIw0NCXbDn/H05pvWtTUPnyhj5TiseF8qW1byrrT5G8saxwvx8nbIK2tpPfKFdIiL7aj9bYQdltn1knJtvk3hpTPy4QvAbaoGfnfrPAsyU1A/CTw9SD/idvDT2wt1hVsm8EsnpovF7WT5z22fcgoFLDo+QCQrp7t1Wx0/Djay2nThi3FO3N051y5fQWoKOvTsm+rRhrzpDoc+Wtrtss3ua54qnQxHRx3YC0M5Xl9DINkwrcunbZBhozsDG2DzX9qcyzJsSfm9Zt5yM2lpcq+dGPRO1wedw4ogoOpobRr9Cja9W/lJvxmjgIiHz2HbSFPtk/VGjL6M7aQor/GDNN3ugSsfUoTTmNaS9+lWeg+tQWcFUPhYQtQB4/gHQ2u7+mQ0H3hVybsIKIh5XBpAdHQ7pww=="
enable_cloudinit = true
}


@@ -1,28 +0,0 @@
terraform {
required_version = ">= 0.13"
required_providers {
libvirt = {
source = "dmacvicar/libvirt"
version = "0.8.3"
}
}
}
provider "libvirt" {
uri = "qemu:///system"
}
module "shared_modules" {
source = "../../shared_modules"
vm_name = "u24-uefi"
image_location = "/var/lib/libvirt/images/noble-server-cloudimg-amd64.img"
ssh_key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDMSuVlvOsMqx9qOrKKB1295FjCf2QhHfR1qola9brGkUcFL9dAztG2qdQnpiuPQ4OJpkedrO3C/ixEw1MLTL8l12SvYy/Q9QFguwylp35Nbw1p8h7jrX1FcNLRYltxkMgVhCs1InT5m0lf56bu1h7JfsMs7Ovsy3lU5OdK4h2MysTSKOLctsE4jDJ+XbJYQzj4rbfB/U7/9ple366cGl6xlaHxVfI4BUFWUOiVU4HWvZjrOM5fqPt+AUFRx1l2D7hLUZgOdVQwgO8GFn0sCyCIw0NCXbDn/H05pvWtTUPnyhj5TiseF8qW1byrrT5G8saxwvx8nbIK2tpPfKFdIiL7aj9bYQdltn1knJtvk3hpTPy4QvAbaoGfnfrPAsyU1A/CTw9SD/idvDT2wt1hVsm8EsnpovF7WT5z22fcgoFLDo+QCQrp7t1Wx0/Djay2nThi3FO3N051y5fQWoKOvTsm+rRhrzpDoc+Wtrtss3ua54qnQxHRx3YC0M5Xl9DINkwrcunbZBhozsDG2DzX9qcyzJsSfm9Zt5yM2lpcq+dGPRO1wedw4ogoOpobRr9Cja9W/lJvxmjgIiHz2HbSFPtk/VGjL6M7aQor/GDNN3ugSsfUoTTmNaS9+lWeg+tQWcFUPhYQtQB4/gHQ2u7+mQ0H3hVybsIKIh5XBpAdHQ7pww=="
enable_cloudinit = true
# ---- OPTIONAL UEFI SETTINGS ----------------------------------------------
uefi_firmware = "/usr/share/edk2/x64/OVMF_CODE.4m.fd"
uefi_nvram_template = "/usr/share/edk2/x64/OVMF_VARS.4m.fd"
uefi_nvram_file_suffix = "-uefi"
# ----------------------------------------------------------------
}

128
scripts/download_images.sh Executable file

@@ -0,0 +1,128 @@
#!/bin/bash
# Function to get latest Fedora Rawhide image URL using a more reliable method
get_fedora_latest_rawhide_url() {
local base_url="https://dl.fedoraproject.org/pub/fedora/linux/development/rawhide/Cloud/x86_64/images/"
# Method 1: Try fetching the latest link from the directory
local temp_dir
temp_dir=$(mktemp -d)
# Download the HTML directory listing
if curl -s -o "$temp_dir/listing.html" "$base_url"; then
# Look for lines with qcow2 files that match our pattern
local latest_file
latest_file=$(grep -i "Fedora-Cloud-Base-Generic-Rawhide.*\.qcow2" "$temp_dir/listing.html" | \
sort -r | head -1 | sed -E 's/.*href="([^"]*)".*/\1/')
if [[ -n "$latest_file" ]]; then
echo "${base_url}${latest_file}"
else
# If we can't find a specific file, try to find any valid Fedora image
local any_file
any_file=$(grep -i "Fedora-Cloud-Base-Generic.*\.qcow2" "$temp_dir/listing.html" | \
head -1 | sed -E 's/.*href="([^"]*)".*/\1/')
if [[ -n "$any_file" ]]; then
echo "${base_url}${any_file}"
else
# Return empty string if we can't find any valid file
echo ""
fi
fi
else
# If network fails, return empty string to skip Fedora download
echo ""
fi
# Cleanup
rm -rf "$temp_dir"
}
# Image URLs with dynamic Fedora URL handling
IMAGES=(
"https://cloud.debian.org/images/cloud/trixie/latest/debian-13-genericcloud-amd64.raw"
"https://download.fedoraproject.org/pub/fedora/linux/releases/42/Cloud/x86_64/images/Fedora-Cloud-Base-Generic-42-1.1.x86_64.qcow2"
"https://download.opensuse.org/tumbleweed/appliances/openSUSE-Tumbleweed-Minimal-VM.x86_64-Cloud.qcow2"
"https://dl.rockylinux.org/pub/rocky/10/images/x86_64/Rocky-10-GenericCloud-Base.latest.x86_64.qcow2"
"https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img"
"https://cloud.centos.org/centos/10-stream/x86_64/images/CentOS-Stream-GenericCloud-x86_64-10-latest.x86_64.qcow2"
)
# Add Fedora image if we can get a valid URL
FEDORA_URL=$(get_fedora_latest_rawhide_url)
if [[ -n "$FEDORA_URL" ]]; then
IMAGES+=("$FEDORA_URL")
fi
# Target directory
TARGET_DIR="/var/lib/libvirt/images"
# Main script execution
main() {
# Check if we have write permissions to the target directory
if [[ ! -w "$TARGET_DIR" ]]; then
# Check if we're already running as root
if [[ $EUID -ne 0 ]]; then
echo "This script requires write access to $TARGET_DIR"
echo "Re-executing with sudo..."
exec sudo "$0" "$@"
else
echo "Error: Cannot write to $TARGET_DIR even with sudo privileges."
exit 1
fi
fi
# Download all images
echo "Starting download of all images..."
echo ""
local success_count=0
local failure_count=0
for url in "${IMAGES[@]}"; do
# Skip empty URLs
if [[ -z "$url" ]]; then
continue
fi
local filename
filename=$(basename "$url")
local filepath="$TARGET_DIR/$filename"
if [[ -f "$filepath" ]]; then
echo "Image $filename already exists, skipping..."
((success_count++))
continue
fi
echo "Downloading $filename..."
# Use wget with progress and retry options
if ! wget -P "$TARGET_DIR" --progress=bar:force:noscroll -c "$url"; then
echo "Failed to download $filename"
((failure_count++))
else
echo "Download completed: $filename"
((success_count++))
fi
done
# Summary
echo ""
echo "Download summary:"
echo "Successful downloads: $success_count"
echo "Failed downloads: $failure_count"
if [[ $failure_count -gt 0 ]]; then
echo "Some downloads failed. Check above messages for details."
exit 1
else
echo "All images downloaded successfully!"
fi
}
# Run main function if script is executed directly
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
main "$@"
fi
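A typical run of the script above looks like the abridged sketch below; it skips images that are already present and re-executes itself with sudo when it cannot write to /var/lib/libvirt/images:
```
$ ./scripts/download_images.sh
Starting download of all images...

Downloading noble-server-cloudimg-amd64.img...
[...]
All images downloaded successfully!
```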

302
scripts/update_image_locations.sh Executable file

@@ -0,0 +1,302 @@
#!/bin/bash
# Script to detect locally available OS images and update image_location URLs in main.tf files
# This script updates terraform configurations to use local image paths instead of remote URLs
# It also supports reverting back to original remote URLs using hardcoded values
# Function to display usage
usage() {
echo "Usage: $0 [options]"
echo " options:"
echo " -h, --help Display this help message"
echo " -d, --dry-run Show what would be changed without making modifications"
echo " -r, --revert Revert image_location URLs back to original remote URLs"
echo ""
echo "Example:"
echo " $0 # Convert remote URLs to local paths (default)"
echo " $0 -d # Dry run - show what would be updated"
echo " $0 -r # Revert to original remote URLs"
echo " $0 -r -d # Dry run revert mode"
exit 1
}
# Parse command line arguments
DRY_RUN=false
REVERT_MODE=false
while [[ $# -gt 0 ]]; do
case $1 in
-h|--help)
usage
;;
-d|--dry-run)
DRY_RUN=true
shift
;;
-r|--revert)
REVERT_MODE=true
shift
;;
*)
echo "Unknown option: $1"
usage
;;
esac
done
# Define the directory where images are stored
IMAGE_DIR="/var/lib/libvirt/images"
# Check if we have write permissions to the target directory
if [[ ! -d "$IMAGE_DIR" ]]; then
echo "Error: Directory $IMAGE_DIR does not exist"
exit 1
fi
# Function to get all locally available image files (including Fedora Rawhide)
get_local_images() {
find "$IMAGE_DIR" -maxdepth 1 -type f \( -name "*.qcow2" -o -name "*.raw" -o -name "*.img" \) | \
while read -r image; do
basename "$image"
done | sort
}
# Function to check if a local file matches the pattern for a Fedora Rawhide image
is_fedora_rawhide_image() {
local filename=$1
# Pattern matching for Fedora Rawhide images that contain "Fedora-Cloud-Base-Generic-Rawhide"
if [[ "$filename" =~ ^Fedora-Cloud-Base-Generic-Rawhide.*\.qcow2$ ]]; then
return 0
fi
return 1
}
# Function to get the latest Fedora Rawhide image path from local directory
get_latest_fedora_rawhide_path() {
local latest_file
latest_file=$(find "$IMAGE_DIR" -maxdepth 1 -name "Fedora-Cloud-Base-Generic-Rawhide*.qcow2" -type f \
| sort -r \
| head -1)
if [[ -n "$latest_file" ]]; then
echo "$latest_file"
fi
}
# Function to provide a mapping between local files and their original URLs
create_original_url_mapping() {
# Create a hash-like mapping for known images
cat << 'EOF'
noble-server-cloudimg-amd64.img=https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img
Fedora-Cloud-Base-Generic-42-1.1.x86_64.qcow2=https://download.fedoraproject.org/pub/fedora/linux/releases/42/Cloud/x86_64/images/Fedora-Cloud-Base-Generic-42-1.1.x86_64.qcow2
openSUSE-Tumbleweed-Minimal-VM.x86_64-Cloud.qcow2=https://download.opensuse.org/tumbleweed/appliances/openSUSE-Tumbleweed-Minimal-VM.x86_64-Cloud.qcow2
Rocky-10-GenericCloud-Base.latest.x86_64.qcow2=https://dl.rockylinux.org/pub/rocky/10/images/x86_64/Rocky-10-GenericCloud-Base.latest.x86_64.qcow2
debian-13-genericcloud-amd64.raw=https://cloud.debian.org/images/cloud/trixie/latest/debian-13-genericcloud-amd64.raw
CentOS-Stream-GenericCloud-x86_64-10-latest.x86_64.qcow2=https://cloud.centos.org/centos/10-stream/x86_64/images/CentOS-Stream-GenericCloud-x86_64-10-latest.x86_64.qcow2
EOF
}
# Find all main.tf files and process them
MAIN_TF_FILES=$(find . -name "main.tf" -type f)
if [ -z "$MAIN_TF_FILES" ]; then
echo "No main.tf files found!"
exit 1
fi
echo "Found main.tf files:"
echo "$MAIN_TF_FILES"
echo ""
# Process each file
for file in $MAIN_TF_FILES; do
echo "Processing $file..."
# Check if the file contains image_location lines
if ! grep -q "image_location" "$file"; then
echo " No image_location found in $file, skipping..."
continue
fi
if [ "$REVERT_MODE" = true ]; then
# Revert operation: change file:// back to original https:// URLs
temp_file=$(mktemp)
while IFS= read -r line || [[ -n "$line" ]]; do
# Check if the line contains a file:// URL
if [[ "$line" =~ .*image_location.*=.*\"file://(.*?)\".* ]]; then
# Extract local path from the file:// URL
local_file_path="${BASH_REMATCH[1]}"
local_filename=$(basename "$local_file_path")
# Handle Fedora Rawhide images specially
if [[ "$local_filename" =~ ^Fedora-Cloud-Base-Generic-Rawhide.*\.qcow2$ ]]; then
echo " Reverting Fedora Rawhide image: $local_filename"
# For Rawhide, we'll keep the file:// reference but note that it's a special case
if [ "$DRY_RUN" = false ]; then
echo "$line" >> "$temp_file"
else
echo " Would process Fedora Rawhide image: $local_filename (keeping file:// reference)"
echo "$line" >> "$temp_file"
fi
else
# For regular images, try to map back to original URL
# Create mapping for this specific case
mapping=$(create_original_url_mapping)
# Find matching original URL
found_match=false
while IFS= read -r mapping_line; do
if [[ -z "$mapping_line" ]] || [[ "$mapping_line" =~ ^#.*$ ]]; then
continue
fi
file_pattern=$(echo "$mapping_line" | cut -d'=' -f1)
original_url=$(echo "$mapping_line" | cut -d'=' -f2)
if [[ "$file_pattern" == "$local_filename" ]]; then
echo " Found matching original URL: $local_filename"
if [ "$DRY_RUN" = false ]; then
# Use precise string replacement to avoid corrupting the file
new_line="${line/\"file:\/\/$local_file_path\"/\"$original_url\"}"
echo "$new_line" >> "$temp_file"
echo " Reverted to original URL: $original_url"
else
echo " Would revert to: $original_url"
echo "$line" >> "$temp_file"
fi
found_match=true
break
fi
done <<< "$mapping"
if [ "$found_match" = false ]; then
echo " Warning: No matching original URL found for $local_filename"
echo "$line" >> "$temp_file"
fi
fi
else
# Not a line with image_location, just copy as is
echo "$line" >> "$temp_file"
fi
done < "$file"
if [ "$DRY_RUN" = false ]; then
mv "$temp_file" "$file"
else
rm "$temp_file"
fi
else
# Normal operation: convert remote URLs to local paths
temp_file=$(mktemp)
while IFS= read -r line || [[ -n "$line" ]]; do
if [[ "$line" =~ .*image_location.*=.*\"(https://.*)\".* ]]; then
remote_url="${BASH_REMATCH[1]}"
filename=$(basename "$remote_url")
# Check if the local file exists (including Fedora Rawhide cases)
local_path="$IMAGE_DIR/$filename"
# Special handling for Fedora Rawhide - check if it's the right pattern
if [[ "$filename" =~ ^Fedora-Cloud-Base-Generic-Rawhide.*\.qcow2$ ]]; then
# For Fedora Rawhide, we need to be more flexible with matching patterns
echo " Checking Fedora Rawhide pattern for: $filename"
# Find the most recent Fedora image that matches the pattern but has different timestamp
latest_rawhide=$(find "$IMAGE_DIR" -maxdepth 1 -name "Fedora-Cloud-Base-Generic-Rawhide*.qcow2" -type f \
| sort -r \
| head -1)
if [[ -n "$latest_rawhide" ]]; then
echo " Found matching local Fedora Rawhide image: $(basename $latest_rawhide)"
if [ "$DRY_RUN" = false ]; then
new_line="${line/\"$remote_url\"/\"file://$latest_rawhide\"}"
echo "$new_line" >> "$temp_file"
echo " Updated to local file: file://$latest_rawhide"
else
echo " Would update Fedora Rawhide to: file://$latest_rawhide"
echo "$line" >> "$temp_file"
fi
else
# No matching locally - check if we can find a similar pattern
echo " Checking for any Fedora-Cloud-Base-Generic-Rawhide*.qcow2 files..."
# Look for any file with the same prefix but different timestamp
local_candidates=$(find "$IMAGE_DIR" -maxdepth 1 -name "*Fedora-Cloud-Base-Generic-Rawhide*" -type f)
if [[ -n "$local_candidates" ]]; then
most_recent=$(echo "$local_candidates" | sort -r | head -1)
echo " Found matching local Fedora Rawhide image: $(basename $most_recent)"
if [ "$DRY_RUN" = false ]; then
new_line="${line/\"$remote_url\"/\"file://$most_recent\"}"
echo "$new_line" >> "$temp_file"
echo " Updated to local file: file://$most_recent"
else
echo " Would update Fedora Rawhide to: file://$most_recent"
echo "$line" >> "$temp_file"
fi
else
echo " Local Fedora Rawhide image not found, using original URL"
echo "$line" >> "$temp_file"
fi
fi
elif [[ -f "$local_path" ]]; then
echo " Found local image: $filename"
if [ "$DRY_RUN" = false ]; then
# Use precise string replacement to avoid corrupting the file
new_line="${line/\"$remote_url\"/\"file://$local_path\"}"
echo "$new_line" >> "$temp_file"
echo " Updated to: file://$local_path"
else
echo " Would update to: file://$local_path"
echo "$line" >> "$temp_file"
fi
else
echo " Local image not found: $filename"
echo "$line" >> "$temp_file"
fi
else
# Not a line with image_location, just copy as is
echo "$line" >> "$temp_file"
fi
done < "$file"
if [ "$DRY_RUN" = false ]; then
mv "$temp_file" "$file"
else
rm "$temp_file"
fi
fi
done
echo ""
if [ "$DRY_RUN" = false ]; then
if [ "$REVERT_MODE" = true ]; then
echo "Image location URLs have been successfully reverted to original remote URLs!"
else
echo "Image location URLs have been successfully updated in all main.tf files!"
fi
else
echo "Dry run completed - no changes were made."
fi
# Show a summary of what would be changed
echo ""
echo "Summary of local image availability:"
find "$IMAGE_DIR" -maxdepth 1 -type f -name "*.qcow2" -o -name "*.raw" -o -name "*.img" | \
while read -r image; do
filename=$(basename "$image")
echo "$filename"
done
# If nothing was found, show what images are expected
if ! find "$IMAGE_DIR" -maxdepth 1 -type f -name "*.qcow2" -o -name "*.raw" -o -name "*.img" | grep -q .; then
echo " No local images found in $IMAGE_DIR"
echo " Run download_images.sh to download required images."
fi

123
scripts/update_ssh_keys.sh Executable file

@@ -0,0 +1,123 @@
#!/bin/bash
# Script to automatically update SSH keys in all main.tf files
# This script looks for terraform_key (or terraform_key.pub) in ~/.ssh directory
# Function to display usage
usage() {
echo "Usage: $0 [options] [ssh_key_name]"
echo " options:"
echo " -r, --remove Remove SSH key from main.tf files"
echo " -h, --help Display this help message"
echo ""
echo " ssh_key_name: Name of the SSH key pair (default: terraform_key)"
echo ""
echo "Example:"
echo " $0 # Updates with default 'terraform_key'"
echo " $0 my_custom_key # Updates with 'my_custom_key'"
echo " $0 -r # Remove SSH key from files"
echo " $0 -r my_custom_key # Remove SSH key from files"
exit 1
}
# Parse command line arguments
REMOVE_KEY=false
SSH_KEY_NAME="terraform_key"
# Check if any arguments are provided
if [ $# -eq 0 ]; then
# No arguments - use default behavior (update)
:
elif [ "$1" = "-h" ] || [ "$1" = "--help" ]; then
usage
elif [ "$1" = "-r" ] || [ "$1" = "--remove" ]; then
# Remove mode enabled
REMOVE_KEY=true
if [ $# -gt 1 ]; then
SSH_KEY_NAME="$2"
fi
else
# Normal update mode with key name provided as argument
SSH_KEY_NAME="$1"
fi
# Expand the home directory properly
HOME_DIR="${HOME:-/home/$(whoami)}"
SSH_KEY_PATH="$HOME_DIR/.ssh/$SSH_KEY_NAME"
SSH_KEY_PUB_PATH="$HOME_DIR/.ssh/$SSH_KEY_NAME.pub"
# If not removing keys, validate SSH key exists
if [ "$REMOVE_KEY" = false ]; then
# Check if SSH key exists
if [ ! -f "$SSH_KEY_PATH" ] && [ ! -f "$SSH_KEY_PUB_PATH" ]; then
echo "Error: SSH key '$SSH_KEY_NAME' not found in $HOME_DIR/.ssh/"
echo "Please generate your SSH key first:"
echo " ssh-keygen -t rsa -b 4096 -f $HOME_DIR/.ssh/$SSH_KEY_NAME"
exit 1
fi
# Check if public key exists specifically (required for reading)
if [ ! -f "$SSH_KEY_PUB_PATH" ]; then
echo "Error: SSH public key '$SSH_KEY_NAME.pub' not found in $HOME_DIR/.ssh/"
exit 1
fi
# Get the public key content (remove any trailing whitespace)
PUBLIC_KEY=$(cat "$SSH_KEY_PUB_PATH" | tr -d '\n')
# Validate that we got a valid SSH key
if [[ ! "$PUBLIC_KEY" =~ ^ssh-[a-z]+[[:space:]]+[A-Za-z0-9+/]*[=]{0,3} ]]; then
echo "Error: Invalid SSH public key format detected"
exit 1
fi
echo "Found SSH public key:"
echo "$PUBLIC_KEY"
echo ""
fi
# Find all main.tf files and update them
MAIN_TF_FILES=$(find . -name "main.tf" -type f)
if [ -z "$MAIN_TF_FILES" ]; then
echo "No main.tf files found!"
exit 1
fi
echo "Updating SSH key in the following files:"
echo "$MAIN_TF_FILES"
echo ""
# Process each file based on remove mode
for file in $MAIN_TF_FILES; do
if [ "$REMOVE_KEY" = true ]; then
echo "Removing SSH key from $file..."
# Set ssh_key to empty string for idempotent removal
sed -i "s/^[[:space:]]*ssh_key[[:space:]]*=[[:space:]]*\"[^\"]*\"/ ssh_key = \"\"/" "$file"
else
echo "Updating SSH key in $file..."
# Update the ssh_key line with new value
sed -i "s#ssh_key = \".*\"#ssh_key = \"$PUBLIC_KEY\"#g" "$file"
fi
done
# Verify the replacement worked
echo ""
echo "Verification:"
for file in $MAIN_TF_FILES; do
echo "File: $file"
if [ "$REMOVE_KEY" = true ]; then
# Show lines with empty ssh_key values
grep "ssh_key = \"\"" "$file" | head -1
else
# Show updated ssh_key lines
grep "ssh_key =" "$file" | head -1
fi
done
echo ""
if [ "$REMOVE_KEY" = true ]; then
echo "SSH key has been successfully removed (set to empty string) in all main.tf files!"
else
echo "SSH key has been successfully updated in all main.tf files!"
fi


@@ -8,7 +8,7 @@ resource "libvirt_domain" "domain" {
# The chipset q35, which does not support the IDE bus, does not work with the terraform-provider-libvirt cloud-init implementation,
# which creates an ISO attached to an IDE bus by default. Workaround is implemented
# https://github.com/dmacvicar/terraform-provider-libvirt/issues/1137#issuecomment-2592329846
# A cleaner solution might be the following :
# A cleaner solution might be this one :
# https://github.com/dmacvicar/terraform-provider-libvirt/pull/895#issuecomment-1911167872
xml {
@@ -20,19 +20,15 @@ resource "libvirt_domain" "domain" {
# ---- optional UEFI support ------------------------------------
# Firmware: only add the string when a path is supplied
firmware = can(var.uefi_firmware) && length(var.uefi_firmware) > 0 ? var.uefi_firmware : null
firmware = local.detected_firmware
# NVRAM block: a dynamic block that is evaluated once per VM
dynamic "nvram" {
# create the block once if a firmware path *and* a template were given
for_each = (can(var.uefi_firmware) && length(var.uefi_firmware) > 0
&& can(var.uefi_nvram_template) && length(var.uefi_nvram_template) > 0
) ? [1] : []
for_each = (local.detected_firmware != null && local.detected_nvram != null) ? [1] : []
content {
# The NVRAM filename is per-VM, but we can honour an optional suffix
file = "/var/lib/libvirt/qemu/nvram/${var.vm_name}-${count.index}${var.uefi_nvram_file_suffix}_VARS.fd"
template = var.uefi_nvram_template
template = local.detected_nvram
}
}
# ----------------------------------------------------------------


@@ -13,7 +13,7 @@ variable "pool_name" {
variable "pool_path" {
description = "Path for the storage pool"
type = string
default = "/tmp/tf_tmp_storage"
default = "/opt/tf_tmp_storage"
}
variable "instance_count" {
@@ -71,7 +71,7 @@ variable "memory" {
variable "vcpu" {
description = "Number of virtual CPUs"
type = number
default = 2
default = 1
}
variable "network_mode" {
@@ -105,23 +105,26 @@ variable "dns_local_only" {
default = false
}
# Improved UEFI variables with automatic detection
# For backward compatibility with the current module interface
variable "uefi_firmware" {
description = <<EOT
Path to the UEFI firmware binary (OVMF_CODE.fd, QEMU_CODE.fd, ...).
Leave empty (or omit on the module call) to create a plain BIOS VM.
Enable UEFI support. Set to true to enable UEFI with auto-detected firmware,
or provide a specific path to the firmware binary.
Set to false or omit to create a plain BIOS VM.
EOT
type = string
default = "" # BIOS only when empty
default = ""
}
variable "uefi_nvram_template" {
description = <<EOT
Path to an NVRAM template that backs the UEFI NVRAM.
If you specify a template, the VM will get a writable NVRAM block.
Leave empty for a plain BIOS VM or if you dont need UEFI NVRAM.
Leave empty for a plain BIOS VM or if you don't need UEFI NVRAM.
EOT
type = string
default = "" # no NVRAM when empty
default = ""
}
variable "uefi_nvram_file_suffix" {
@@ -138,4 +141,56 @@ variable "uefi_nvram_file_suffix" {
# Computed variable for network domain (derived from vm_name)
locals {
computed_network_domain = var.network_domain != "" ? var.network_domain : "${var.vm_name}.local"
# List of common UEFI firmware paths in order of preference
uefi_firmware_paths = [
"/usr/share/edk2/ovmf/OVMF_CODE.4m.fd",
"/usr/share/edk2/x64/OVMF_CODE.4m.fd",
"/usr/share/OVMF/OVMF_CODE.4m.fd",
"/usr/share/ovmf/OVMF_CODE.4m.fd",
"/usr/share/edk2/ovmf/OVMF_CODE.fd",
"/usr/share/edk2/x64/OVMF_CODE.fd",
"/usr/share/OVMF/OVMF_CODE.fd",
"/usr/share/ovmf/OVMF_CODE.fd"
]
uefi_nvram_paths = [
"/usr/share/edk2/ovmf/OVMF_VARS.4m.fd",
"/usr/share/edk2/x64/OVMF_VARS.4m.fd",
"/usr/share/OVMF/OVMF_VARS.4m.fd",
"/usr/share/ovmf/OVMF_VARS.4m.fd",
"/usr/share/edk2/ovmf/OVMF_VARS.fd",
"/usr/share/edk2/x64/OVMF_VARS.fd",
"/usr/share/OVMF/OVMF_VARS.fd",
"/usr/share/ovmf/OVMF_VARS.fd"
]
# Determine if UEFI should be enabled
uefi_enabled = (
var.uefi_firmware == "true" ||
var.uefi_firmware == true ||
(var.uefi_firmware != "" && var.uefi_firmware != false && var.uefi_firmware != null)
)
# Function to get first available firmware path or null
detected_firmware = (
local.uefi_enabled ? (
length(local.uefi_firmware_paths) > 0 ? (
length([for path in local.uefi_firmware_paths : path if fileexists(path)]) > 0 ?
[for path in local.uefi_firmware_paths : path if fileexists(path)][0] :
null
) : null
) : null
)
# Function to get first available NVRAM template or null
detected_nvram = (
local.uefi_enabled ? (
length(local.uefi_nvram_paths) > 0 ? (
length([for path in local.uefi_nvram_paths : path if fileexists(path)]) > 0 ?
[for path in local.uefi_nvram_paths : path if fileexists(path)][0] :
null
) : null
) : null
)
}
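Given this detection logic, a deployment only needs to set `uefi_firmware = true` to get UEFI with an auto-detected firmware and NVRAM template, as the UEFI examples in this diff already do. A minimal module call along those lines (VM name and image path are illustrative) could be:
```
module "shared_modules" {
  source         = "../../shared_modules"
  vm_name        = "u24-uefi"
  image_location = "https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img"
  # true enables UEFI; the firmware and NVRAM template are picked from
  # uefi_firmware_paths / uefi_nvram_paths via fileexists()
  uefi_firmware  = true
}
```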