IP Management: Getting VM IPs Reliably (DHCP, MAC Mapping, Integrations)

“What’s the IP of that VM?” shouldn’t require logging into Proxmox, checking DHCP leases, or guessing. IP addresses are infrastructure data. They should be queryable, predictable, and documented automatically.

Manual IP tracking breaks at scale: spreadsheets go stale, DHCP hands out a different address after a reboot, and static IPs mean per-VM manual configuration.

IP addresses are data. They need to be collected automatically.

The IP Problem

VMs need IP addresses. Getting them reliably is harder than it looks:

DHCP challenges:

  • IP changes on reboot (unless reserved)
  • Lease expires, new IP assigned
  • “What IP did that VM get?”

Static IP challenges:

  • Manual configuration per VM
  • Easy to have conflicts
  • Doesn’t work with templates (need customization)

Cloud-init challenges:

  • Works great for initial setup
  • Changing IP requires VM recreation
  • Need to track assigned IPs somewhere

Strategy 1: DHCP with MAC Reservations

The most reliable approach for dynamic environments: the DHCP server reserves an IP based on the VM's MAC address.

How It Works

1. VM created with specific MAC address
2. MAC registered in DHCP server with reserved IP
3. VM boots, requests DHCP
4. DHCP server gives reserved IP
5. IP is consistent across reboots

Proxmox Side

Specify MAC address when creating VM:

# Create VM with specific MAC
qm create 100 --name web-server --net0 virtio=BC:24:11:00:01:00,bridge=vmbr0
# Or update existing
qm set 100 --net0 virtio=BC:24:11:00:01:00,bridge=vmbr0

Use a MAC address scheme:

BC:24:11:XX:YY:ZZ
         │  │  └─ Sequence (00-FF)
         │  └──── VM ID low byte
         └─────── VM ID high byte

Example:
VM 100: BC:24:11:00:64:00  (0x64 = 100)
VM 101: BC:24:11:00:65:00  (0x65 = 101)
VM 256: BC:24:11:01:00:00  (0x100 = 256)
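Deriving MACs from VM IDs is easy to script rather than compute by hand. A minimal helper (hypothetical, but following the scheme above with the same BC:24:11 prefix) could look like:

```python
def vm_mac(vmid: int, seq: int = 0, prefix: str = "BC:24:11") -> str:
    """Derive a MAC address from a Proxmox VM ID using the
    prefix : high byte : low byte : sequence scheme described above."""
    if not 0 <= vmid <= 0xFFFF or not 0 <= seq <= 0xFF:
        raise ValueError("vmid must fit in two bytes, seq in one")
    high, low = vmid >> 8, vmid & 0xFF
    return f"{prefix}:{high:02X}:{low:02X}:{seq:02X}"

print(vm_mac(100))  # BC:24:11:00:64:00
print(vm_mac(256))  # BC:24:11:01:00:00
```

The same function can feed both the `qm create` call and the DHCP reservation, so the two can never drift apart.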

Router Side (MikroTik Example)

# Add DHCP reservation
/ip dhcp-server lease add address=10.0.0.100 mac-address=BC:24:11:00:64:00 server=dhcp1 comment="web-server"
# Or via script for bulk
:foreach mac,ip in={
"BC:24:11:00:64:00"="10.0.0.100";
"BC:24:11:00:65:00"="10.0.0.101";
"BC:24:11:00:66:00"="10.0.0.102"
} do={
/ip dhcp-server lease add address=$ip mac-address=$mac server=dhcp1
}

Router Side (OPNsense/pfSense)

Services → DHCPv4 → [Interface] → DHCP Static Mappings

MAC address: BC:24:11:00:64:00
IP address: 10.0.0.100
Hostname: web-server

Router Side (VyOS)

configure
set service dhcp-server shared-network-name LAN subnet 10.0.0.0/24 static-mapping web-server mac-address 'BC:24:11:00:64:00'
set service dhcp-server shared-network-name LAN subnet 10.0.0.0/24 static-mapping web-server ip-address '10.0.0.100'
commit

Strategy 2: Cloud-Init Static IPs

For immutable VMs, where the IP is set once at creation.

Terraform Example

resource "proxmox_vm_qemu" "server" {
  name  = "web-server"
  clone = "ubuntu-template"

  # IP is set via cloud-init at first boot
  ipconfig0 = "ip=10.0.0.100/24,gw=10.0.0.1"
}

Manual Cloud-Init

qm set 100 --ipconfig0 ip=10.0.0.100/24,gw=10.0.0.1

Tracking Static IPs

Maintain IP allocation in code:

inventory/ip-allocation.yml
networks:
  production:
    subnet: 10.0.0.0/24
    gateway: 10.0.0.1
    allocated:
      10.0.0.10: proxmox-host
      10.0.0.100: web-server-1
      10.0.0.101: web-server-2
      10.0.0.150: database
  management:
    subnet: 10.10.0.0/24
    gateway: 10.10.0.1
    allocated:
      10.10.0.10: proxmox-mgmt
      10.10.0.100: monitoring
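An allocation file is only as good as the checks behind it. A validation sketch (hypothetical; it assumes the structure above already parsed into a dict — e.g. with PyYAML — and uses an inline sample here to stay self-contained) flags out-of-subnet addresses and hostnames assigned twice, before anything is deployed:

```python
import ipaddress

def validate_allocation(networks: dict) -> list[str]:
    """Return problems found in an ip-allocation structure:
    IPs outside their subnet, and hostnames assigned more than once."""
    errors = []
    seen_hosts = {}  # hostname -> IP it was first assigned
    for net_name, net in networks.items():
        subnet = ipaddress.ip_network(net["subnet"])
        for ip, host in net["allocated"].items():
            if ipaddress.ip_address(ip) not in subnet:
                errors.append(f"{ip} ({host}) not in {net_name} subnet {subnet}")
            if host in seen_hosts:
                errors.append(f"{host} assigned twice: {seen_hosts[host]} and {ip}")
            seen_hosts[host] = ip
    return errors

# Inline sample mirroring ip-allocation.yml, with one deliberate mistake
networks = {
    "production": {
        "subnet": "10.0.0.0/24",
        "allocated": {"10.0.0.100": "web-server-1", "10.10.0.5": "web-server-2"},
    },
}
print(validate_allocation(networks))
# ['10.10.0.5 (web-server-2) not in production subnet 10.0.0.0/24']
```

Run as a pre-commit hook or CI step, this is what makes "conflicts are detected before deployment" true in practice.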

Strategy 3: IPAM Integration

For larger environments, use a dedicated IPAM (IP Address Management) system.

phpIPAM Integration

# Query IPAM for next available IP
curl -X POST "https://ipam.example.com/api/app/addresses/first_free/3/" \
-H "token: xxx" \
-d "hostname=new-server"
# Register IP
curl -X POST "https://ipam.example.com/api/app/addresses/" \
-H "token: xxx" \
-d "subnetId=3&ip=10.0.0.105&hostname=new-server&description=Web%20server"

NetBox Integration

import pynetbox

nb = pynetbox.api('https://netbox.example.com', token='xxx')

# Get next available IP in the prefix
prefix = nb.ipam.prefixes.get(prefix='10.0.0.0/24')
next_ip = prefix.available_ips.list()[0]

# Create the IP assignment (NetBox attaches IPs to interfaces,
# so the assigned object is the VM's interface record)
nb.ipam.ip_addresses.create(
    address=str(next_ip),
    dns_name='web-server.lab.local',
    description='Web server',
    assigned_object_type='virtualization.vminterface',
    assigned_object_id=vm_interface_id
)

Collecting VM IPs from Proxmox

Via QEMU Guest Agent

Requires the qemu-guest-agent package installed and running in the VM:

# Get network info from running VM
qm guest cmd 100 network-get-interfaces
# Output includes IP addresses
# Parse with jq
qm guest cmd 100 network-get-interfaces | jq -r '.[] | select(.name != "lo") | .["ip-addresses"][] | select(.["ip-address-type"] == "ipv4") | .["ip-address"]'

Via API

# Get VM status including network
pvesh get /nodes/pve1/qemu/100/agent/network-get-interfaces
# Or for all VMs
for vmid in $(pvesh get /nodes/pve1/qemu --output-format json | jq -r '.[].vmid'); do
  echo "VM ${vmid}:"
  pvesh get /nodes/pve1/qemu/${vmid}/agent/network-get-interfaces 2>/dev/null | jq -r '.result[] | select(.name != "lo") | "\(.name): \(.["ip-addresses"][0]["ip-address"])"'
done

Inventory Script

collect-inventory.py
#!/usr/bin/env python3
import json

from proxmoxer import ProxmoxAPI

proxmox = ProxmoxAPI('proxmox.lab.local', user='root@pam',
                     password='xxx', verify_ssl=False)

inventory = {}
for node in proxmox.nodes.get():
    node_name = node['node']
    for vm in proxmox.nodes(node_name).qemu.get():
        vmid = vm['vmid']
        vm_info = {
            'name': vm['name'],
            'node': node_name,
            'status': vm['status'],
            'ip_addresses': [],
        }
        if vm['status'] == 'running':
            try:
                interfaces = proxmox.nodes(node_name).qemu(vmid).agent(
                    'network-get-interfaces').get()
                for iface in interfaces['result']:
                    if iface['name'] == 'lo':
                        continue
                    for addr in iface.get('ip-addresses', []):
                        if addr['ip-address-type'] == 'ipv4':
                            vm_info['ip_addresses'].append(addr['ip-address'])
            except Exception:
                pass  # Guest agent not available
        inventory[vmid] = vm_info

print(json.dumps(inventory, indent=2))

Dynamic Ansible Inventory

Generate Ansible inventory from Proxmox:

proxmox_inventory.py
#!/usr/bin/env python3
import json

from proxmoxer import ProxmoxAPI

def get_inventory():
    proxmox = ProxmoxAPI('proxmox.lab.local',
                         user='ansible@pve',
                         token_name='inventory',
                         token_value='xxx',
                         verify_ssl=False)
    inventory = {
        '_meta': {'hostvars': {}},
        'all': {'children': ['proxmox_vms']},
        'proxmox_vms': {'hosts': []},
    }
    for node in proxmox.nodes.get():
        for vm in proxmox.nodes(node['node']).qemu.get():
            if vm['status'] != 'running':
                continue
            vmid = vm['vmid']
            name = vm['name']
            # Get the first IPv4 address reported by the guest agent
            try:
                interfaces = proxmox.nodes(node['node']).qemu(vmid).agent(
                    'network-get-interfaces').get()
            except Exception:
                continue  # Guest agent not available
            ip = next((addr['ip-address']
                       for iface in interfaces['result']
                       if iface['name'] != 'lo'
                       for addr in iface.get('ip-addresses', [])
                       if addr['ip-address-type'] == 'ipv4'), None)
            if ip is None:
                continue
            inventory['proxmox_vms']['hosts'].append(name)
            inventory['_meta']['hostvars'][name] = {
                'ansible_host': ip,
                'proxmox_vmid': vmid,
                'proxmox_node': node['node'],
            }
    return inventory

if __name__ == '__main__':
    print(json.dumps(get_inventory(), indent=2))

Usage:

# Use dynamic inventory
ansible -i proxmox_inventory.py all -m ping
# In ansible.cfg
[defaults]
inventory = ./proxmox_inventory.py

DNS Integration

Automatically register VMs in DNS:

With PowerDNS API

register-dns.sh
#!/bin/bash
VM_NAME="$1"
IP="$2"
DOMAIN="lab.local"
PDNS_API="http://dns.lab.local:8081/api/v1"
PDNS_KEY="xxx"

# Add A record
curl -X PATCH "${PDNS_API}/servers/localhost/zones/${DOMAIN}." \
  -H "X-API-Key: ${PDNS_KEY}" \
  -H "Content-Type: application/json" \
  -d "{
    \"rrsets\": [{
      \"name\": \"${VM_NAME}.${DOMAIN}.\",
      \"type\": \"A\",
      \"ttl\": 300,
      \"changetype\": \"REPLACE\",
      \"records\": [{\"content\": \"${IP}\", \"disabled\": false}]
    }]
  }"

With nsupdate (BIND)

update-dns.sh
#!/bin/bash
VM_NAME="$1"
IP="$2"
DOMAIN="lab.local"
DNS_SERVER="10.0.0.53"
KEY_FILE="/etc/bind/keys/update.key"

nsupdate -k "${KEY_FILE}" << EOF
server ${DNS_SERVER}
zone ${DOMAIN}
update delete ${VM_NAME}.${DOMAIN} A
update add ${VM_NAME}.${DOMAIN} 300 A ${IP}
send
EOF

Automation Pipeline

Complete workflow:

# vm-creation.yml (Ansible)
- name: Create VM with managed IP
  hosts: localhost
  vars:
    vm_name: web-server
    vm_id: 100
    mac_address: "BC:24:11:00:64:00"
    ip_address: "10.0.0.100"
  tasks:
    - name: Create VM in Proxmox
      community.general.proxmox_kvm:
        api_host: proxmox.lab.local
        api_token_id: terraform
        api_token_secret: "{{ vault_proxmox_token }}"
        node: pve1
        vmid: "{{ vm_id }}"
        name: "{{ vm_name }}"
        clone: ubuntu-template
        net:
          net0: "virtio={{ mac_address }},bridge=vmbr0"

    - name: Register DHCP reservation on router
      community.routeros.command:
        commands:
          - /ip dhcp-server lease add address={{ ip_address }} mac-address={{ mac_address }} server=dhcp1 comment="{{ vm_name }}"
      delegate_to: router

    - name: Register DNS
      community.general.nsupdate:
        server: "10.0.0.53"
        zone: "lab.local"
        record: "{{ vm_name }}"
        type: "A"
        value: "{{ ip_address }}"

    - name: Start VM
      community.general.proxmox_kvm:
        api_host: proxmox.lab.local
        api_token_id: terraform
        api_token_secret: "{{ vault_proxmox_token }}"
        node: pve1
        vmid: "{{ vm_id }}"
        state: started

    - name: Wait for VM to be reachable
      wait_for:
        host: "{{ ip_address }}"
        port: 22
        delay: 10
        timeout: 300

    - name: Update inventory
      lineinfile:
        path: inventory/hosts
        line: "{{ vm_name }} ansible_host={{ ip_address }}"

The Lesson

IP addresses are data. They must be collected automatically.

Manual IP management fails because:

  • Humans forget to update documentation
  • Spreadsheets get stale
  • “What IP is that?” becomes a daily question
  • Conflicts happen because no one checked

Automated IP management works because:

  • DHCP reservations are code, versioned and reviewable
  • Inventory is generated from actual state
  • DNS updates automatically
  • Conflicts are detected before deployment

Choose your strategy based on scale:

  • Small (1-20 VMs): DHCP reservations, manual tracking
  • Medium (20-100 VMs): Cloud-init static IPs, generated inventory
  • Large (100+ VMs): IPAM integration (NetBox, phpIPAM)

The goal is always the same: asking “what’s the IP?” should return an answer in seconds, from automation, not from hunting through UIs and logs.