“What’s the IP of that VM?” shouldn’t require logging into Proxmox, checking DHCP leases, or guessing. IP addresses are infrastructure data. They should be queryable, predictable, and documented automatically.
Manual IP tracking breaks down quickly. Spreadsheets go stale. DHCP hands out a different IP after a reboot. Static IPs mean touching every VM by hand. None of it scales.
IP addresses are data. They need to be collected automatically.
The IP Problem
VMs need IP addresses. Getting them reliably is harder than it looks:
DHCP challenges:
- IP changes on reboot (unless reserved)
- Lease expires, new IP assigned
- “What IP did that VM get?”
Static IP challenges:
- Manual configuration per VM
- Easy to have conflicts
- Doesn’t work with templates (need customization)
Cloud-init challenges:
- Works great for initial setup
- Changing IP requires VM recreation
- Need to track assigned IPs somewhere
Strategy 1: DHCP with MAC Reservations
The most reliable option for dynamic environments: the DHCP server reserves an IP based on the VM's MAC address.
How It Works
1. VM created with a specific MAC address
2. MAC registered in DHCP server with reserved IP
3. VM boots, requests DHCP
4. DHCP server gives the reserved IP
5. IP is consistent across reboots

Proxmox Side
Specify MAC address when creating VM:
# Create VM with specific MAC
qm create 100 --name web-server --net0 virtio=BC:24:11:00:01:00,bridge=vmbr0
# Or update existing
qm set 100 --net0 virtio=BC:24:11:00:01:00,bridge=vmbr0

Use a MAC address scheme:
BC:24:11:XX:YY:ZZ
         │  │  └─ Sequence (00-FF)
         │  └──── VM ID low byte
         └─────── VM ID high byte
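Because this scheme is a pure function of the VM ID, the MAC can be generated rather than hand-picked. A minimal Python sketch (the `vm_mac` name is mine):

```python
def vm_mac(vmid: int, seq: int = 0) -> str:
    """Derive a deterministic MAC from a Proxmox VM ID.

    BC:24:11 is the prefix used in the scheme above; the next two
    octets carry the VM ID, the last octet a per-NIC sequence.
    """
    if not 0 <= vmid <= 0xFFFF or not 0 <= seq <= 0xFF:
        raise ValueError("vmid must fit in 16 bits, seq in 8 bits")
    return "BC:24:11:{:02X}:{:02X}:{:02X}".format(
        (vmid >> 8) & 0xFF,  # VM ID high byte
        vmid & 0xFF,         # VM ID low byte
        seq,                 # sequence (00-FF)
    )

print(vm_mac(100))  # BC:24:11:00:64:00
print(vm_mac(256))  # BC:24:11:01:00:00
```

Generating the MAC this way also makes DHCP reservations scriptable: the same function feeds both the `qm create` call and the router-side lease entry.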
Example:

VM 100: BC:24:11:00:64:00 (0x64 = 100)
VM 101: BC:24:11:00:65:00
VM 256: BC:24:11:01:00:00

Router Side (MikroTik Example)
# Add DHCP reservation
/ip dhcp-server lease add address=10.0.0.100 mac-address=BC:24:11:00:64:00 server=dhcp1 comment="web-server"
# Or via script for bulk:
:foreach mac,ip in={
    "BC:24:11:00:64:00"="10.0.0.100";
    "BC:24:11:00:65:00"="10.0.0.101";
    "BC:24:11:00:66:00"="10.0.0.102"
} do={
    /ip dhcp-server lease add address=$ip mac-address=$mac server=dhcp1
}

Router Side (OPNsense/pfSense)
Services → DHCPv4 → [Interface] → DHCP Static Mappings
MAC address: BC:24:11:00:64:00
IP address: 10.0.0.100
Hostname: web-server

Router Side (VyOS)
configure
set service dhcp-server shared-network-name LAN subnet 10.0.0.0/24 static-mapping web-server mac-address 'BC:24:11:00:64:00'
set service dhcp-server shared-network-name LAN subnet 10.0.0.0/24 static-mapping web-server ip-address '10.0.0.100'
commit

Strategy 2: Cloud-Init Static IPs
For immutable VMs where IP is set at creation.
Terraform Example
resource "proxmox_vm_qemu" "server" {
  name  = "web-server"
  clone = "ubuntu-template"

  # IP is set via cloud-init at first boot
  ipconfig0 = "ip=10.0.0.100/24,gw=10.0.0.1"
}

Manual Cloud-Init
qm set 100 --ipconfig0 ip=10.0.0.100/24,gw=10.0.0.1

Tracking Static IPs
Maintain IP allocation in code:
networks:
  production:
    subnet: 10.0.0.0/24
    gateway: 10.0.0.1
    allocated:
      10.0.0.10: proxmox-host
      10.0.0.100: web-server-1
      10.0.0.101: web-server-2
      10.0.0.150: database
  management:
    subnet: 10.10.0.0/24
    gateway: 10.10.0.1
    allocated:
      10.10.0.10: proxmox-mgmt
      10.10.0.100: monitoring

Strategy 3: IPAM Integration
For larger environments, use dedicated IPAM (IP Address Management).
phpIPAM Integration
# Query IPAM for next available IP (subnet ID 3)
curl -X POST "https://ipam.example.com/api/app/addresses/first_free/3/" \
  -H "token: xxx" \
  -d "hostname=new-server"
# Register IP
curl -X POST "https://ipam.example.com/api/app/addresses/" \
  -H "token: xxx" \
  -d "subnetId=3&ip=10.0.0.105&hostname=new-server&description=Web server"

NetBox Integration
import pynetbox

nb = pynetbox.api('https://netbox.example.com', token='xxx')

# Get next available IP
prefix = nb.ipam.prefixes.get(prefix='10.0.0.0/24')
next_ip = prefix.available_ips.list()[0]

# Create IP assignment (in NetBox, IPs attach to interfaces, not VMs directly)
nb.ipam.ip_addresses.create(
    address=str(next_ip),
    dns_name='web-server.lab.local',
    description='Web server',
    assigned_object_type='virtualization.vminterface',
    assigned_object_id=vm_interface_id  # ID of the VM's interface object
)

Collecting VM IPs from Proxmox
Via QEMU Guest Agent
Requires qemu-guest-agent installed in VM:
# Get network info from running VM
qm guest cmd 100 network-get-interfaces
# Output includes IP addresses
# Parse with jq
qm guest cmd 100 network-get-interfaces | jq -r '.[] | select(.name != "lo") | .["ip-addresses"][] | select(.["ip-address-type"] == "ipv4") | .["ip-address"]'

Via API
# Get VM network info
pvesh get /nodes/pve1/qemu/100/agent/network-get-interfaces
# Or for all VMs
for vmid in $(pvesh get /nodes/pve1/qemu --output-format json | jq -r '.[].vmid'); do
  echo "VM ${vmid}:"
  pvesh get /nodes/pve1/qemu/${vmid}/agent/network-get-interfaces 2>/dev/null | jq -r '.result[] | select(.name != "lo") | "\(.name): \(.["ip-addresses"][0]["ip-address"])"'
done

Inventory Script
#!/usr/bin/env python3
import json
from proxmoxer import ProxmoxAPI

proxmox = ProxmoxAPI('proxmox.lab.local', user='root@pam',
                     password='xxx', verify_ssl=False)

inventory = {}

for node in proxmox.nodes.get():
    node_name = node['node']

    for vm in proxmox.nodes(node_name).qemu.get():
        vmid = vm['vmid']
        name = vm['name']
        status = vm['status']

        vm_info = {
            'name': name,
            'node': node_name,
            'status': status,
            'ip_addresses': []
        }

        if status == 'running':
            try:
                interfaces = proxmox.nodes(node_name).qemu(vmid).agent('network-get-interfaces').get()
                for iface in interfaces['result']:
                    if iface['name'] != 'lo':
                        for addr in iface.get('ip-addresses', []):
                            if addr['ip-address-type'] == 'ipv4':
                                vm_info['ip_addresses'].append(addr['ip-address'])
            except Exception:
                pass  # Guest agent not available

        inventory[vmid] = vm_info

print(json.dumps(inventory, indent=2))

Dynamic Ansible Inventory
Generate Ansible inventory from Proxmox:
#!/usr/bin/env python3
import json
from proxmoxer import ProxmoxAPI

def get_inventory():
    proxmox = ProxmoxAPI('proxmox.lab.local', user='ansible@pve',
                         token_name='inventory', token_value='xxx',
                         verify_ssl=False)

    inventory = {
        '_meta': {'hostvars': {}},
        'all': {'children': ['proxmox_vms']},
        'proxmox_vms': {'hosts': []}
    }

    for node in proxmox.nodes.get():
        for vm in proxmox.nodes(node['node']).qemu.get():
            if vm['status'] != 'running':
                continue

            vmid = vm['vmid']
            name = vm['name']

            # Get the first IPv4 address from the guest agent
            ip = None
            try:
                interfaces = proxmox.nodes(node['node']).qemu(vmid).agent('network-get-interfaces').get()
                for iface in interfaces['result']:
                    if iface['name'] == 'lo':
                        continue
                    for addr in iface.get('ip-addresses', []):
                        if addr['ip-address-type'] == 'ipv4':
                            ip = addr['ip-address']
                            break
                    if ip:
                        break
            except Exception:
                pass  # Guest agent not available

            if ip:
                inventory['proxmox_vms']['hosts'].append(name)
                inventory['_meta']['hostvars'][name] = {
                    'ansible_host': ip,
                    'proxmox_vmid': vmid,
                    'proxmox_node': node['node']
                }

    return inventory

if __name__ == '__main__':
    print(json.dumps(get_inventory(), indent=2))

Usage:
# Use dynamic inventory
ansible -i proxmox_inventory.py all -m ping
# In ansible.cfg
[defaults]
inventory = ./proxmox_inventory.py

DNS Integration
Automatically register VMs in DNS:
With PowerDNS API
#!/bin/bash
VM_NAME=$1
IP=$2
DOMAIN="lab.local"
PDNS_API="http://dns.lab.local:8081/api/v1"
PDNS_KEY="xxx"
# Add A record
curl -X PATCH "${PDNS_API}/servers/localhost/zones/${DOMAIN}." \
  -H "X-API-Key: ${PDNS_KEY}" \
  -H "Content-Type: application/json" \
  -d "{
    \"rrsets\": [{
      \"name\": \"${VM_NAME}.${DOMAIN}.\",
      \"type\": \"A\",
      \"ttl\": 300,
      \"changetype\": \"REPLACE\",
      \"records\": [{\"content\": \"${IP}\", \"disabled\": false}]
    }]
  }"

With nsupdate (BIND)
#!/bin/bash
VM_NAME=$1
IP=$2
DOMAIN="lab.local"
DNS_SERVER="10.0.0.53"
KEY_FILE="/etc/bind/keys/update.key"

nsupdate -k ${KEY_FILE} << EOF
server ${DNS_SERVER}
zone ${DOMAIN}
update delete ${VM_NAME}.${DOMAIN} A
update add ${VM_NAME}.${DOMAIN} 300 A ${IP}
send
EOF

Automation Pipeline
Complete workflow:
# vm-creation.yml (Ansible)
- name: Create VM with managed IP
  hosts: localhost
  vars:
    vm_name: web-server
    vm_id: 100
    mac_address: "BC:24:11:00:64:00"
    ip_address: "10.0.0.100"

  tasks:
    - name: Create VM in Proxmox
      community.general.proxmox_kvm:
        api_host: proxmox.lab.local
        api_user: terraform@pve
        api_token_id: terraform
        api_token_secret: "{{ vault_proxmox_token }}"
        node: pve1
        vmid: "{{ vm_id }}"
        name: "{{ vm_name }}"
        clone: ubuntu-template
        net:
          net0: "virtio={{ mac_address }},bridge=vmbr0"

    - name: Register DHCP reservation on router
      community.routeros.command:
        commands:
          - /ip dhcp-server lease add address={{ ip_address }} mac-address={{ mac_address }} server=dhcp1 comment="{{ vm_name }}"
      delegate_to: router

    - name: Register DNS
      community.general.nsupdate:
        server: "10.0.0.53"
        zone: "lab.local"
        record: "{{ vm_name }}"
        type: "A"
        value: "{{ ip_address }}"

    - name: Start VM
      community.general.proxmox_kvm:
        api_host: proxmox.lab.local
        api_user: terraform@pve
        api_token_id: terraform
        api_token_secret: "{{ vault_proxmox_token }}"
        node: pve1
        vmid: "{{ vm_id }}"
        state: started

    - name: Wait for VM to be reachable
      wait_for:
        host: "{{ ip_address }}"
        port: 22
        delay: 10
        timeout: 300

    - name: Update inventory
      lineinfile:
        path: inventory/hosts
        line: "{{ vm_name }} ansible_host={{ ip_address }}"

The Lesson
IP addresses are data. They must be collected automatically.
Manual IP management fails because:
- Humans forget to update documentation
- Spreadsheets get stale
- “What IP is that?” becomes a daily question
- Conflicts happen because no one checked
Automated IP management works because:
- DHCP reservations are code, versioned and reviewable
- Inventory is generated from actual state
- DNS updates automatically
- Conflicts are detected before deployment
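That last point is cheap to implement. With allocations declared in a file (like the networks layout under Tracking Static IPs), a pre-deployment check is a few lines of Python. A sketch, assuming that layout loaded as a dict (the check_allocations name is mine):

```python
import ipaddress

def check_allocations(networks: dict) -> list[str]:
    """Return problems found in a declared allocation map:
    duplicate IPs across networks, or IPs outside their subnet."""
    problems, seen = [], {}
    for net_name, net in networks.items():
        subnet = ipaddress.ip_network(net['subnet'])
        for ip, host in net['allocated'].items():
            if ipaddress.ip_address(ip) not in subnet:
                problems.append(f"{ip} ({host}) outside {net_name} subnet {subnet}")
            if ip in seen:
                problems.append(f"{ip} assigned to both {seen[ip]} and {host}")
            seen[ip] = host
    return problems

# Deliberately broken example: 10.0.0.150 appears twice,
# and is outside the management subnet
networks = {
    'production': {
        'subnet': '10.0.0.0/24',
        'allocated': {'10.0.0.100': 'web-server-1', '10.0.0.150': 'database'},
    },
    'management': {
        'subnet': '10.10.0.0/24',
        'allocated': {'10.10.0.10': 'proxmox-mgmt', '10.0.0.150': 'backup-target'},
    },
}
for problem in check_allocations(networks):
    print(problem)
```

Run this in CI against the allocation file and a conflicting merge request fails before anything touches the network.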
Choose your strategy based on scale:
- Small (1-20 VMs): DHCP reservations, manual tracking
- Medium (20-100 VMs): Cloud-init static IPs, generated inventory
- Large (100+ VMs): IPAM integration (NetBox, phpIPAM)
The goal is always the same: asking “what’s the IP?” should return an answer in seconds, from automation, not from hunting through UIs and logs.
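To make that concrete: once the inventory script above has dumped its JSON, answering the question is a dictionary lookup. A hypothetical `whatip` helper (the name is mine; the shape matches that script's output):

```python
import json

def whatip(name: str, inventory: dict) -> list[str]:
    """Look up a VM's IPs by name in a collected inventory
    ({vmid: {name, node, status, ip_addresses}} as produced above)."""
    for vm in inventory.values():
        if vm['name'] == name:
            return vm['ip_addresses']
    raise KeyError(f"no VM named {name!r} in inventory")

# Example against a one-VM inventory dump
inventory = json.loads('''{
  "100": {"name": "web-server", "node": "pve1",
          "status": "running", "ip_addresses": ["10.0.0.100"]}
}''')
print(whatip('web-server', inventory))  # ['10.0.0.100']
```

Seconds, from data that was collected automatically.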