Networking in Proxmox breaks more setups than storage and compute combined. Not because it’s complicated — it’s actually simpler than most people expect. It breaks because virtualization networking requires consistency at Layer 2, and inconsistency is invisible until nothing works.
I’ve debugged countless “my VM has no network” issues. 99% were: wrong VLAN tag, wrong bridge, or physical switch misconfiguration. This is how to get networking right from the start.
The Default Setup
After a clean Proxmox install, you have:
```
Physical NIC (eno1/eth0)
└── vmbr0 (Linux bridge)
    ├── Proxmox host (management IP)
    └── VMs connect here
```

This is correct and works. Don't overcomplicate it until you need to.
Check your current config:
```bash
cat /etc/network/interfaces
```

The default looks like:
```
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
    address 10.0.0.10/24
    gateway 10.0.0.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
```

Key points:

- `eno1` has no IP (manual) — it's just a bridge port
- `vmbr0` has the IP — this is your management address
- VMs attach to `vmbr0` and share the same network
Linux Bridges Explained
A bridge is a virtual switch. Physical NICs and virtual NICs connect to it.
```
                 ┌─────────────────────────────┐
Physical Network │       vmbr0 (bridge)        │ Virtual Network
─────────────────│                             │──────────────────
 eno1 ───────────│ port                   port │─── VM1 (tap100i0)
                 │                        port │─── VM2 (tap101i0)
                 │                        port │─── Proxmox host
                 └─────────────────────────────┘
```

All devices on the bridge see each other at Layer 2. Same broadcast domain, same VLAN (unless you add tagging).
Creating Additional Bridges
For network isolation, create multiple bridges:
```bash
# Edit network config
nano /etc/network/interfaces
```

Add:

```
# Management network (existing)
auto vmbr0
iface vmbr0 inet static
    address 10.0.0.10/24
    gateway 10.0.0.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0

# DMZ network (new bridge, second NIC)
auto vmbr1
iface vmbr1 inet manual
    bridge-ports eno2
    bridge-stp off
    bridge-fd 0

# Internal-only network (no physical port)
auto vmbr2
iface vmbr2 inet manual
    bridge-ports none
    bridge-stp off
    bridge-fd 0
```

Apply:

```bash
ifreload -a
```

Now you have:

- `vmbr0`: Management + production VMs
- `vmbr1`: DMZ VMs (different physical NIC, isolated)
- `vmbr2`: Internal-only (VMs can talk to each other, no outside access)
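A VM lands on one of these networks by pointing its NIC at the right bridge. A sketch, with hypothetical VMIDs:

```
# /etc/pve/qemu-server/101.conf — a DMZ VM (VMID hypothetical)
net0: virtio,bridge=vmbr1

# /etc/pve/qemu-server/102.conf — an internal-only VM
net0: virtio,bridge=vmbr2
```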
VLANs
VLANs separate traffic on the same physical network. Essential when you have one physical NIC but need multiple isolated networks.
VLAN-Aware Bridge (Recommended)
Modern approach. One bridge handles multiple VLANs:
```
auto vmbr0
iface vmbr0 inet static
    address 10.0.0.10/24
    gateway 10.0.0.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
    bridge-pvid 1
```

Key settings:

- `bridge-vlan-aware yes`: Enable VLAN tagging
- `bridge-vids 2-4094`: Allow these VLANs
- `bridge-pvid 1`: Native VLAN (untagged traffic)
VMs specify their VLAN when connecting:
```
# In VM config (/etc/pve/qemu-server/100.conf)
net0: virtio,bridge=vmbr0,tag=100
```

Or via the Web UI: VM → Hardware → Network → VLAN Tag: 100
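The same tag can also be applied from the host shell with Proxmox's `qm` tool; a sketch, assuming VMID 100:

```shell
# Set the VLAN tag on a VM's first NIC (VMID 100 assumed)
# Note: omitting macaddr= regenerates the MAC; include the
# existing one if the guest must keep its address.
qm set 100 --net0 virtio,bridge=vmbr0,tag=100
```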
Traditional VLAN Interfaces (Older Method)
Create a sub-interface for each VLAN:
```
auto eno1.100
iface eno1.100 inet manual

auto vmbr100
iface vmbr100 inet manual
    bridge-ports eno1.100
    bridge-stp off
    bridge-fd 0
```

This works but creates more interfaces. VLAN-aware bridges are cleaner.
Common VLAN Mistakes
1. Physical switch not configured for VLANs
Your Proxmox config is perfect, but the switch port is still an access port on VLAN 1. Nothing works.
Fix: Configure switch port as trunk allowing your VLANs.
2. VLAN tag mismatch
VM is tagged VLAN 100, but there’s no VLAN 100 on the switch.
Fix: Verify VLANs exist end-to-end: switch, router, Proxmox.
3. Native VLAN confusion
Management traffic is untagged (PVID), VM traffic is tagged. If PVID doesn’t match switch native VLAN, management breaks.
Fix: Be explicit about native VLAN on both sides.
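Being explicit on the switch side might look like the following sketch, in a Cisco-IOS-style dialect (port and VLAN numbers are illustrative; syntax varies by vendor). The native VLAN here must match `bridge-pvid` on the Proxmox bridge:

```
! Trunk port facing the Proxmox host (illustrative)
interface GigabitEthernet1/0/1
 switchport mode trunk
 switchport trunk allowed vlan 1,100,200
 switchport trunk native vlan 1
```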
Bonding (Link Aggregation)
Multiple NICs acting as one for redundancy or throughput.
Bonding Modes
| Mode | Name | Use Case |
|---|---|---|
| 0 | balance-rr | Round-robin, requires switch support |
| 1 | active-backup | Failover, no switch config needed |
| 2 | balance-xor | XOR hash, requires switch support |
| 3 | broadcast | Send on all, niche uses |
| 4 | 802.3ad (LACP) | Dynamic aggregation, requires switch LACP |
| 5 | balance-tlb | Adaptive transmit, no switch config |
| 6 | balance-alb | Adaptive load balancing, no switch config |
Recommended:
- mode 1 (active-backup): Simplest, works everywhere, true redundancy
- mode 4 (LACP): Best throughput, but requires switch configuration
Active-Backup Bond (Easy)
```
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode active-backup
    bond-primary eno1

auto vmbr0
iface vmbr0 inet static
    address 10.0.0.10/24
    gateway 10.0.0.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```

Behavior: eno1 is active, eno2 is standby. If eno1 fails, eno2 takes over within about 100ms (the MII polling interval).
LACP Bond (Best Performance)
Requires switch LACP configuration first:
```
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad
    bond-lacp-rate fast
    bond-xmit-hash-policy layer3+4

auto vmbr0
iface vmbr0 inet static
    address 10.0.0.10/24
    gateway 10.0.0.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
```

Check bond status:

```bash
cat /proc/net/bonding/bond0
```

Bonding Gotchas
Single flow doesn’t benefit: A single TCP connection uses one link. Bonding helps aggregate throughput, not single-connection speed.
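Why a single flow is pinned: the transmit hash maps a flow's header fields to one slave, so the same connection always exits the same link. A toy illustration (a simplification — the kernel's real layer3+4 policy also mixes in the IP addresses):

```shell
# Simplified flow-to-slave hash: same ports in, same slave out,
# so one TCP flow never exceeds one link's bandwidth.
src_port=49152; dst_port=443; num_slaves=2
echo "flow pinned to slave $(( (src_port ^ dst_port) % num_slaves ))"
# → flow pinned to slave 1
```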
Switch must match: LACP requires switch-side configuration. Mismatched settings = no connectivity.
MII monitoring: 100ms (bond-miimon 100) is standard. Lower = faster failover but more CPU.
Network Isolation
Keeping networks separate is as important as connecting them.
Isolated Internal Network
```
auto vmbr2
iface vmbr2 inet manual
    bridge-ports none
    bridge-stp off
    bridge-fd 0
```

VMs on vmbr2 can only talk to each other. No physical network, no internet.
Use cases:
- Database servers that only backends should reach
- Development environments
- Testing isolated from production
VMs Accessing Multiple Networks
A VM can have multiple NICs:
```
# VM config
net0: virtio,bridge=vmbr0,tag=10   # Production VLAN
net1: virtio,bridge=vmbr2          # Internal only
```

Inside the VM, configure each interface appropriately.
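Inside the guest, that might look like this Debian-style `/etc/network/interfaces` sketch (interface names such as `ens18`/`ens19` depend on the guest OS, and the addresses are assumptions):

```
auto ens18
iface ens18 inet static
    address 10.10.0.50/24
    gateway 10.10.0.1

auto ens19
iface ens19 inet static
    address 192.168.100.5/24
    # no gateway on the internal-only NIC
```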
Proxmox Host Networking
The host itself needs network access for:
- Management (web UI, SSH)
- Corosync (clustering)
- Storage (NFS, Ceph, iSCSI)
- Updates
Management Network
Always use static IP for management:
```
auto vmbr0
iface vmbr0 inet static
    address 10.0.0.10/24
    gateway 10.0.0.1
```

Never DHCP for a hypervisor. You need to know where it is.
Separate Storage Network (If Needed)
For high-performance storage (Ceph, iSCSI), dedicate a network:
```
auto vmbr1
iface vmbr1 inet static
    address 10.10.0.10/24
    bridge-ports eno2
    bridge-stp off
    bridge-fd 0
```

Configure storage to use this network, keeping management traffic separate.
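Pointing storage at the dedicated network is then a matter of addressing it via the storage subnet. A hypothetical NFS entry in `/etc/pve/storage.cfg` (storage name, server address, and export path are all assumptions):

```
nfs: backup-nfs
    path /mnt/pve/backup-nfs
    server 10.10.0.5
    export /srv/proxmox-backup
    content backup
```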
Troubleshooting
VM Has No Network
Check in order:
```bash
# 1. Is the bridge up?
ip link show vmbr0

# 2. Is the physical port up?
ip link show eno1

# 3. Is the VM's tap interface in the bridge?
bridge link show

# 4. Inside VM, is the interface up?
# (via console, not SSH, since network is broken)
ip a

# 5. Can VM ping gateway?
ping 10.0.0.1

# 6. Check for VLAN issues
tcpdump -i vmbr0 -n icmp
```

Traffic Not Reaching VM
```bash
# Check bridge forwarding
sysctl net.bridge.bridge-nf-call-iptables
# Should be 0, or iptables might interfere

# Check VM's interface is in bridge
bridge link show master vmbr0
# Should list tap100i0, tap101i0, etc.

# Capture on bridge
tcpdump -i vmbr0 host 10.0.0.100 -n
```

VLAN Traffic Not Working
```bash
# Check the VLAN-aware bridge
bridge vlan show

# Should show VLANs per port:
# vmbr0     1 PVID Egress Untagged
# eno1      1 PVID Egress Untagged
#           100
#           200
# tap100i0  100 PVID Egress Untagged

# If the VM's VLAN isn't listed, check the VM config
```

My Network Layout
Simple Homelab
```
ISP Router (10.0.0.1)
    │
    └── eno1
         │
    ┌────┴────┐
    │  vmbr0  │  10.0.0.10 (Proxmox)
    │ (bridge)│
    ├─────────┤
    │   VM1   │  10.0.0.101 (VLAN-aware, tag=1)
    │   VM2   │  10.0.0.102 (VLAN-aware, tag=1)
    └─────────┘
```

Flat network. Simple. Everything on the same subnet.
Production with VLANs
```
Core Switch (trunk port)
    │  VLANs: 10, 20, 30, 100
    │
    └── eno1
         │
    ┌────┴────────────────────┐
    │ vmbr0                   │  VLAN-aware
    │ (PVID 10 = management)  │
    ├─────────────────────────┤
    │ tag=10:  Management VMs │
    │ tag=20:  Production VMs │
    │ tag=30:  DMZ            │
    │ tag=100: Storage        │
    └─────────────────────────┘
```

- Proxmox management: 10.10.0.10/24 (VLAN 10, untagged on bridge)
- Production VMs: 10.20.0.0/24 (VLAN 20)
- DMZ VMs: 10.30.0.0/24 (VLAN 30)
- Storage network: 10.100.0.0/24 (VLAN 100)

One physical NIC, multiple isolated networks.
The Lesson
99% of virtualization network problems come down to inconsistent Layer 2 configuration.
The config looks right. Proxmox is configured. VMs have IPs. But nothing works. Why?
Because somewhere in the chain — switch port, VLAN configuration, bridge settings, VM tag — something doesn’t match.
Virtualization networking requires consistency:
- Switch port must trunk the right VLANs
- Bridge must be VLAN-aware if you’re tagging
- VM must use the correct tag
- Physical network must route between VLANs (if needed)
When it breaks:
- Start at the physical layer (is the cable plugged in?)
- Check switch configuration (is VLAN allowed?)
- Check bridge configuration (is VLAN-aware enabled?)
- Check VM configuration (is tag correct?)
- Check inside VM (is interface up?)
The fix is almost always a mismatch somewhere. Find it, fix it, document it so you don’t repeat it.