Clicking through the Proxmox UI works for one VM. It doesn’t work for thirty VMs that need to be consistent. It doesn’t work when you need to recreate an environment. It doesn’t work when “what changed?” matters.
Terraform brings Infrastructure as Code to Proxmox: define VMs in files, track changes in Git, apply reproducibly. But Terraform with Proxmox has quirks. The provider has limitations. State can drift. Changes can be destructive.
This is how to use Terraform with Proxmox in patterns that won’t rot.
Why Terraform for Proxmox
Terraform solves:
- Reproducible environments (dev = staging = prod)
- Change tracking (what changed, when, why)
- Collaboration (PRs, code review for infrastructure)
- Documentation (code is documentation)
- Disaster recovery (rebuild from code)
Terraform doesn’t solve:
- Day-2 operations inside VMs (use Ansible)
- Configuration management (use Ansible, Chef, Puppet)
- One-off tasks (just use the UI)
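The split in practice: Terraform provisions the VM, then hands its outputs to a configuration tool. As a sketch (assuming the hashicorp/local provider is in `required_providers`, and a `web_servers` module like the one built later in this post), a `local_file` resource can render an Ansible inventory from the provisioned IPs:

```hcl
# Sketch only: render an Ansible inventory from Terraform outputs.
# Assumes a "web_servers" module (using count) that exposes an
# "ip_address" output, as in the module examples later in this post.
resource "local_file" "ansible_inventory" {
  filename        = "${path.module}/inventory.ini"
  file_permission = "0644"
  content         = <<-EOT
    [web]
    %{for ip in module.web_servers[*].ip_address~}
    ${ip}
    %{endfor~}
  EOT
}
```

Ansible then takes over for day-2 work: `ansible-playbook -i inventory.ini site.yml`.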
Provider Setup
Install Provider
In your Terraform project:
```hcl
terraform {
  required_version = ">= 1.0"

  required_providers {
    proxmox = {
      source  = "Telmate/proxmox"
      version = "~> 3.0"
    }
  }
}
```
Provider Configuration
```hcl
provider "proxmox" {
  pm_api_url          = "https://proxmox.lab.local:8006/api2/json"
  pm_api_token_id     = "terraform@pve!automation"
  pm_api_token_secret = var.proxmox_api_secret

  # TLS verification
  pm_tls_insecure = false # Set true only for self-signed certs

  # Parallel operations
  pm_parallel = 4

  # Logging (for debugging)
  pm_log_enable = true
  pm_log_file   = "terraform-plugin-proxmox.log"
  pm_log_levels = {
    _default    = "debug"
    _capturelog = ""
  }
}
```
API Token Creation
On Proxmox:
```bash
# Create dedicated Terraform user
pveum user add terraform@pve

# Create token with privilege separation disabled
pveum user token add terraform@pve automation --privsep 0

# Grant permissions
pveum acl modify / --users terraform@pve --roles PVEAdmin
```

Store the token in an environment variable or a secrets manager:

```bash
export PM_API_TOKEN_SECRET="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
```

```hcl
variable "proxmox_api_secret" {
  description = "Proxmox API token secret"
  type        = string
  sensitive   = true
  default     = "" # Use TF_VAR_proxmox_api_secret env var
}
```
Basic VM Resource
Clone from Template
```hcl
resource "proxmox_vm_qemu" "web_server" {
  name        = "web-server-01"
  target_node = "pve1"

  # Clone from template
  clone      = "ubuntu-2404-template"
  full_clone = true

  # Hardware
  cores   = 2
  sockets = 1
  memory  = 4096

  # Agent (required for IP retrieval)
  agent = 1

  # Disk
  disks {
    scsi {
      scsi0 {
        disk {
          size    = "32G"
          storage = "local-zfs"
        }
      }
    }
  }

  # Network
  network {
    model  = "virtio"
    bridge = "vmbr0"
    tag    = 10
  }

  # Cloud-init
  os_type    = "cloud-init"
  ciuser     = "admin"
  cipassword = var.vm_password
  sshkeys    = file("~/.ssh/id_ed25519.pub")

  ipconfig0 = "ip=10.10.0.100/24,gw=10.10.0.1"

  # Lifecycle
  lifecycle {
    ignore_changes = [
      network, # Don't recreate on network changes
    ]
  }
}
```
Output VM Info
```hcl
output "web_server_ip" {
  value       = proxmox_vm_qemu.web_server.default_ipv4_address
  description = "Web server IP address"
}

output "web_server_id" {
  value       = proxmox_vm_qemu.web_server.vmid
  description = "VM ID in Proxmox"
}
```
Module Structure
For reusable, maintainable code:
```
proxmox-terraform/
├── modules/
│   ├── vm/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   └── outputs.tf
│   └── lxc/
│       ├── main.tf
│       ├── variables.tf
│       └── outputs.tf
├── environments/
│   ├── dev/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   └── terraform.tfvars
│   ├── staging/
│   │   └── ...
│   └── prod/
│       └── ...
├── .gitignore
└── README.md
```
VM Module
```hcl
# modules/vm/variables.tf
variable "name" {
  description = "VM name"
  type        = string
}

variable "target_node" {
  description = "Proxmox node to create VM on"
  type        = string
  default     = "pve1"
}

variable "template" {
  description = "Template to clone from"
  type        = string
  default     = "ubuntu-2404-template"
}

variable "cores" {
  description = "Number of CPU cores"
  type        = number
  default     = 2
}

variable "memory" {
  description = "Memory in MB"
  type        = number
  default     = 2048
}

variable "disk_size" {
  description = "Disk size"
  type        = string
  default     = "32G"
}

variable "storage" {
  description = "Storage pool"
  type        = string
  default     = "local-zfs"
}

variable "network_bridge" {
  description = "Network bridge"
  type        = string
  default     = "vmbr0"
}

variable "vlan_tag" {
  description = "VLAN tag"
  type        = number
  default     = null
}

variable "ip_address" {
  description = "Static IP address with CIDR"
  type        = string
}

variable "gateway" {
  description = "Default gateway"
  type        = string
}

variable "ssh_keys" {
  description = "SSH public keys"
  type        = string
}

variable "tags" {
  description = "VM tags"
  type        = list(string)
  default     = []
}
```

```hcl
# modules/vm/main.tf
resource "proxmox_vm_qemu" "vm" {
  name        = var.name
  target_node = var.target_node
  clone       = var.template
  full_clone  = true

  cores   = var.cores
  sockets = 1
  memory  = var.memory
  agent   = 1

  disks {
    scsi {
      scsi0 {
        disk {
          size    = var.disk_size
          storage = var.storage
        }
      }
    }
  }

  network {
    model  = "virtio"
    bridge = var.network_bridge
    tag    = var.vlan_tag
  }

  os_type = "cloud-init"
  sshkeys = var.ssh_keys

  ipconfig0 = "ip=${var.ip_address},gw=${var.gateway}"

  tags = join(",", var.tags)

  lifecycle {
    ignore_changes = [network]
  }
}
```

```hcl
# modules/vm/outputs.tf
output "id" {
  value = proxmox_vm_qemu.vm.vmid
}

output "name" {
  value = proxmox_vm_qemu.vm.name
}

output "ip_address" {
  value = proxmox_vm_qemu.vm.default_ipv4_address
}
```
Using Modules
```hcl
module "web_servers" {
  source = "../../modules/vm"

  count = 2

  name        = "web-${count.index + 1}"
  target_node = "pve1"
  template    = "ubuntu-2404-template"

  cores     = 2
  memory    = 4096
  disk_size = "32G"

  ip_address = "10.10.0.${100 + count.index}/24"
  gateway    = "10.10.0.1"

  ssh_keys = file("~/.ssh/id_ed25519.pub")

  tags = ["web", "dev"]
}

module "database" {
  source = "../../modules/vm"

  name        = "db-1"
  target_node = "pve1"
  template    = "ubuntu-2404-template"

  cores     = 4
  memory    = 8192
  disk_size = "100G"

  ip_address = "10.10.0.50/24"
  gateway    = "10.10.0.1"

  ssh_keys = file("~/.ssh/id_ed25519.pub")

  tags = ["database", "dev"]
}
```
State Management
Remote State
Never use local state for teams:
```hcl
terraform {
  backend "s3" {
    bucket         = "terraform-state"
    key            = "proxmox/dev/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-locks"
  }
}
```

Or with Terraform Cloud:

```hcl
terraform {
  cloud {
    organization = "my-org"

    workspaces {
      name = "proxmox-dev"
    }
  }
}
```
State Drift
Proxmox changes outside Terraform cause drift:
```bash
# Check for drift
terraform plan

# If drift is detected, either:
# 1. Import the change into state
# 2. Revert the change in Proxmox
# 3. Update Terraform to match
```
Import Existing Resources
```bash
# Import existing VM
terraform import proxmox_vm_qemu.existing 'pve1/qemu/100'
```

```hcl
# Then add to your .tf file
resource "proxmox_vm_qemu" "existing" {
  name        = "existing-vm"
  target_node = "pve1"
  # ... match existing config
}
```
Safe Changes
Lifecycle Rules
Prevent accidental destruction:
```hcl
resource "proxmox_vm_qemu" "production_db" {
  name = "prod-db"
  # ...

  lifecycle {
    prevent_destroy = true

    # Don't recreate for these changes
    ignore_changes = [
      network,
      disk,
    ]
  }
}
```
Plan Before Apply
Always review:
```bash
# Generate plan
terraform plan -out=tfplan

# Review plan file
terraform show tfplan

# Only if plan looks good
terraform apply tfplan
```
Targeted Changes
Limit blast radius:
```bash
# Only apply to specific resource
terraform apply -target=module.web_servers

# Only apply to specific instance
terraform apply -target='module.web_servers[0]'
```
Variables and Environments
Environment-Specific Variables
```hcl
# environments/dev/terraform.tfvars
environment = "dev"
vm_count    = 2
vm_size     = "small"
```

```hcl
# environments/prod/terraform.tfvars
environment = "prod"
vm_count    = 5
vm_size     = "large"
```
Variable Validation
```hcl
variable "environment" {
  description = "Environment name"
  type        = string

  validation {
    condition     = contains(["dev", "staging", "prod"], var.environment)
    error_message = "Environment must be dev, staging, or prod."
  }
}

variable "vm_size" {
  description = "VM size preset"
  type        = string
  default     = "small"

  validation {
    condition     = contains(["small", "medium", "large"], var.vm_size)
    error_message = "VM size must be small, medium, or large."
  }
}
```
Size Presets
```hcl
locals {
  vm_sizes = {
    small  = { cores = 2, memory = 2048, disk = "32G" }
    medium = { cores = 4, memory = 4096, disk = "64G" }
    large  = { cores = 8, memory = 8192, disk = "128G" }
  }

  selected_size = local.vm_sizes[var.vm_size]
}

# Usage
resource "proxmox_vm_qemu" "vm" {
  cores  = local.selected_size.cores
  memory = local.selected_size.memory
  # ...
}
```
Common Patterns
Count vs For Each
```hcl
# count: simple numbered resources
resource "proxmox_vm_qemu" "worker" {
  count = 3
  name  = "worker-${count.index + 1}"
  # ...
}

# for_each: named resources (more stable)
variable "vms" {
  default = {
    web    = { ip = "10.10.0.100", cores = 2 }
    api    = { ip = "10.10.0.101", cores = 4 }
    worker = { ip = "10.10.0.102", cores = 2 }
  }
}

resource "proxmox_vm_qemu" "server" {
  for_each = var.vms

  name  = each.key
  cores = each.value.cores

  ipconfig0 = "ip=${each.value.ip}/24,gw=10.10.0.1"
}
```

for_each is safer: with count, removing an item from the middle shifts every later index, and Terraform destroys and recreates those VMs. With for_each, instances are keyed by name, so removing one leaves the rest untouched.
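If you started with count and later want for_each, Terraform 1.1+ can record the rename declaratively with a `moved` block, so state is updated instead of the VM being destroyed and recreated (the addresses below are illustrative, matching the worker/server examples above):

```hcl
# Record a count -> for_each migration in code. Addresses are
# illustrative; use the ones from your own configuration.
moved {
  from = proxmox_vm_qemu.worker[0]
  to   = proxmox_vm_qemu.server["web"]
}
```

On older Terraform versions, `terraform state mv` does the same job imperatively.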
Dynamic Blocks
```hcl
variable "additional_disks" {
  default = [
    { size = "100G", storage = "local-zfs" },
    { size = "200G", storage = "ceph-pool" }
  ]
}

resource "proxmox_vm_qemu" "vm" {
  # ...

  dynamic "disk" {
    for_each = var.additional_disks
    content {
      size    = disk.value.size
      storage = disk.value.storage
      type    = "scsi"
    }
  }
}
```
Conditional Resources
```hcl
variable "create_backup_server" {
  default = false
}

resource "proxmox_vm_qemu" "backup" {
  count = var.create_backup_server ? 1 : 0
  name  = "backup-server"
  # ...
}
```
Debugging
Provider Logs
```hcl
provider "proxmox" {
  pm_log_enable = true
  pm_log_file   = "terraform-plugin-proxmox.log"
  pm_log_levels = {
    _default = "debug"
  }
}
```
Common Issues
1. Template not found:

```
Error: 500 Configuration file 'nodes/pve1/qemu-server/xyz.conf' does not exist
```

Fix: verify the template name matches the one in Proxmox exactly.

2. IP not detected:

```
Output: default_ipv4_address = ""
```

Fix: ensure `agent = 1` is set and qemu-guest-agent is installed and running in the template.

3. Disk changes cause recreation:

Fix: add `disk` to `ignore_changes` in the `lifecycle` block.
The Lesson
IaC is about predictability, not faster clicking.
The goal of Terraform isn’t to create VMs faster than the UI. It’s to:
- Know what exists: Code defines reality
- Know what changed: Git history shows when and why
- Reproduce reliably: Same code = same infrastructure
- Collaborate safely: Code review before apply
The patterns that survive:
- Modules for reusability
- Remote state for teams
- Lifecycle rules for safety
- Variables for flexibility
- Plan before apply always
Terraform with Proxmox has rough edges. The provider isn’t perfect. But imperfect IaC beats clicking through a UI every time you need to remember “how did I configure that?”