Networking Baseline: Bridges, VLANs, Bonding — and the Mistakes I Made

Networking in Proxmox breaks more setups than storage and compute combined. Not because it’s complicated — it’s actually simpler than most people expect. It breaks because virtualization networking requires consistency at Layer 2, and inconsistency is invisible until nothing works.

I’ve debugged countless “my VM has no network” issues. 99% were: wrong VLAN tag, wrong bridge, or physical switch misconfiguration. This is how to get networking right from the start.

The Default Setup

After a clean Proxmox install, you have:

Physical NIC (eno1/eth0)
└── vmbr0 (Linux bridge)
    ├── Proxmox host (management IP)
    └── VMs connect here

This is correct and works. Don’t overcomplicate it until you need to.

Check your current config:

cat /etc/network/interfaces

Default looks like:

auto lo
iface lo inet loopback

auto eno1
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
    address 10.0.0.10/24
    gateway 10.0.0.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0

Key points:

  • eno1 has no IP (manual) — it’s just a bridge port
  • vmbr0 has the IP — this is your management address
  • VMs attach to vmbr0 and share the same network
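As a concrete example, a VM's virtual NIC can be pointed at vmbr0 from the CLI (the VM ID 100 here is hypothetical; adjust to your own):

```shell
# Attach VM 100's first NIC to vmbr0 using the virtio driver
# (same effect as setting Bridge in the Web UI's Network dialog)
qm set 100 --net0 virtio,bridge=vmbr0

# Verify the resulting line in the VM config
grep ^net0 /etc/pve/qemu-server/100.conf
```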

Linux Bridges Explained

A bridge is a virtual switch. Physical NICs and virtual NICs connect to it.

                   ┌─────────────────────────────┐
Physical Network   │        vmbr0 (bridge)       │   Virtual Network
───────────────────│                             │──────────────────
       eno1 ───────│ port                   port │─── VM1 (tap100i0)
                   │                        port │─── VM2 (tap101i0)
                   │                        port │─── Proxmox host
                   └─────────────────────────────┘

All devices on the bridge see each other at Layer 2. Same broadcast domain, same VLAN (unless you add tagging).
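Under the hood the bridge and its ports are ordinary kernel objects, so membership can be inspected with iproute2:

```shell
# List every port attached to vmbr0 (physical NICs and VM tap devices)
ip link show master vmbr0

# Show forwarding-database entries the bridge has learned,
# i.e. which MAC address lives behind which port
bridge fdb show br vmbr0
```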

Creating Additional Bridges

For network isolation, create multiple bridges:

# Edit network config
nano /etc/network/interfaces

Add:

# Management network (existing)
auto vmbr0
iface vmbr0 inet static
    address 10.0.0.10/24
    gateway 10.0.0.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0

# DMZ network (new bridge, second NIC)
auto vmbr1
iface vmbr1 inet manual
    bridge-ports eno2
    bridge-stp off
    bridge-fd 0

# Internal-only network (no physical port)
auto vmbr2
iface vmbr2 inet manual
    bridge-ports none
    bridge-stp off
    bridge-fd 0

Apply:

ifreload -a

Now you have:

  • vmbr0: Management + production VMs
  • vmbr1: DMZ VMs (different physical NIC, isolated)
  • vmbr2: Internal-only (VMs can talk to each other, no outside access)
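VMs can then be attached to the right bridge per role (the VM IDs below are hypothetical):

```shell
# Put a DMZ VM on vmbr1
qm set 201 --net0 virtio,bridge=vmbr1

# Give an internal-only VM a NIC on vmbr2 (no outside access)
qm set 202 --net0 virtio,bridge=vmbr2
```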

VLANs

VLANs separate traffic on the same physical network. Essential when you have one physical NIC but need multiple isolated networks.

Modern approach. One bridge handles multiple VLANs:

auto vmbr0
iface vmbr0 inet static
    address 10.0.0.10/24
    gateway 10.0.0.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
    bridge-pvid 1

Key settings:

  • bridge-vlan-aware yes: Enable VLAN tagging
  • bridge-vids 2-4094: Allow these VLANs
  • bridge-pvid 1: Native VLAN (untagged traffic)

VMs specify their VLAN when connecting:

# In VM config (/etc/pve/qemu-server/100.conf)
net0: virtio,bridge=vmbr0,tag=100

Or via Web UI: VM → Hardware → Network → VLAN Tag: 100
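The same tag can be set from the CLI instead of editing the config file by hand:

```shell
# Tag VM 100's NIC with VLAN 100 on the VLAN-aware bridge
qm set 100 --net0 virtio,bridge=vmbr0,tag=100

# While the VM is running, confirm the tag on its tap interface
bridge vlan show dev tap100i0
```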

Traditional VLAN Interfaces (Older Method)

Create a sub-interface for each VLAN:

auto eno1.100
iface eno1.100 inet manual

auto vmbr100
iface vmbr100 inet manual
    bridge-ports eno1.100
    bridge-stp off
    bridge-fd 0

This works but creates more interfaces. VLAN-aware bridges are cleaner.

Common VLAN Mistakes

1. Physical switch not configured for VLANs

Your Proxmox config is perfect, but the switch port is access-mode VLAN 1. Nothing works.

Fix: Configure switch port as trunk allowing your VLANs.

2. VLAN tag mismatch

VM is tagged VLAN 100, but there’s no VLAN 100 on the switch.

Fix: Verify VLANs exist end-to-end: switch, router, Proxmox.

3. Native VLAN confusion

Management traffic is untagged (PVID), VM traffic is tagged. If PVID doesn’t match switch native VLAN, management breaks.

Fix: Be explicit about native VLAN on both sides.
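A quick way to check end-to-end consistency is to watch for tagged frames on the physical port (interface names here follow the earlier examples):

```shell
# Show VLAN-tagged frames arriving at the physical NIC;
# -e prints the Ethernet header, including the 802.1Q tag
tcpdump -i eno1 -e -n vlan

# Narrow to a single VLAN, e.g. 100 — if nothing appears here,
# the switch isn't trunking that VLAN to this port
tcpdump -i eno1 -e -n vlan 100
```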

Bonding

Bonding combines multiple NICs into one logical interface, for redundancy or extra throughput.

Bonding Modes

Mode  Name             Use Case
0     balance-rr       Round-robin, requires switch support
1     active-backup    Failover, no switch config needed
2     balance-xor      XOR hash, requires switch support
3     broadcast        Send on all, niche uses
4     802.3ad (LACP)   Dynamic aggregation, requires switch LACP
5     balance-tlb      Adaptive transmit, no switch config
6     balance-alb      Adaptive load balancing, no switch config

Recommended:

  • mode 1 (active-backup): Simplest, works everywhere, true redundancy
  • mode 4 (LACP): Best throughput, but requires switch configuration

Active-Backup Bond (Easy)

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode active-backup
    bond-primary eno1

auto vmbr0
iface vmbr0 inet static
    address 10.0.0.10/24
    gateway 10.0.0.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0

Behavior: eno1 carries all traffic while eno2 stands by. If eno1's link drops, the bond fails over to eno2 within roughly the bond-miimon polling interval (100 ms here).
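The failover can be exercised deliberately (from the console, not over SSH on the same link):

```shell
# Check which slave is currently carrying traffic
grep "Currently Active Slave" /proc/net/bonding/bond0

# Simulate a failure of the primary link
ip link set eno1 down

# The bond should now report eno2 as active
grep "Currently Active Slave" /proc/net/bonding/bond0

# Restore the link; with bond-primary eno1, traffic moves back
ip link set eno1 up
```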

LACP Bond (Best Performance)

Requires switch LACP configuration first:

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad
    bond-lacp-rate fast
    bond-xmit-hash-policy layer3+4

auto vmbr0
iface vmbr0 inet static
    address 10.0.0.10/24
    gateway 10.0.0.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes

Check bond status:

cat /proc/net/bonding/bond0

Bonding Gotchas

Single flow doesn’t benefit: A single TCP connection uses one link. Bonding helps aggregate throughput, not single-connection speed.

Switch must match: LACP requires switch-side configuration. Mismatched settings = no connectivity.

MII monitoring: 100ms (bond-miimon 100) is standard. Lower = faster failover but more CPU.
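For LACP bonds, a quick sanity check is whether both sides actually negotiated; the fields below come from the standard /proc/net/bonding output:

```shell
# Bond-level 802.3ad status: aggregator and partner details
grep -A 6 "802.3ad info" /proc/net/bonding/bond0

# Both slaves should share the same Aggregator ID; a partner MAC
# of 00:00:00:00:00:00 means the switch isn't speaking LACP
grep -E "Aggregator ID|Partner Mac" /proc/net/bonding/bond0
```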

Network Isolation

Keeping networks separate is as important as connecting them.

Isolated Internal Network

auto vmbr2
iface vmbr2 inet manual
    bridge-ports none
    bridge-stp off
    bridge-fd 0

VMs on vmbr2 can only talk to each other. No physical network, no internet.

Use cases:

  • Database servers that only backends should reach
  • Development environments
  • Testing isolated from production

VMs Accessing Multiple Networks

A VM can have multiple NICs:

# VM config
net0: virtio,bridge=vmbr0,tag=10 # Production VLAN
net1: virtio,bridge=vmbr2 # Internal only

Inside VM, configure each interface appropriately.
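Inside a Debian-style guest, the two interfaces from the example above might look like this (interface names and addresses are assumptions; they depend on the guest OS):

```shell
# /etc/network/interfaces inside the VM (assumed Debian/ifupdown guest)
auto ens18
iface ens18 inet static
    address 10.20.0.50/24
    gateway 10.20.0.1        # default route via the production VLAN

auto ens19
iface ens19 inet static
    address 192.168.100.50/24
    # no gateway here: vmbr2 has no route out anyway
```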

Proxmox Host Networking

The host itself needs network access for:

  • Management (web UI, SSH)
  • Corosync (clustering)
  • Storage (NFS, Ceph, iSCSI)
  • Updates

Management Network

Always use static IP for management:

auto vmbr0
iface vmbr0 inet static
    address 10.0.0.10/24
    gateway 10.0.0.1

Never DHCP for a hypervisor. You need to know where it is.

Separate Storage Network (If Needed)

For high-performance storage (Ceph, iSCSI), dedicate a network:

auto vmbr1
iface vmbr1 inet static
    address 10.10.0.10/24
    bridge-ports eno2
    bridge-stp off
    bridge-fd 0

Configure storage to use this network, keeping management traffic separate.
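For example, NFS storage can be registered against a server on the storage subnet so its traffic uses vmbr1 rather than the management NIC (storage name, server IP, and export path below are hypothetical):

```shell
# Register an NFS storage reachable via the dedicated 10.10.0.0/24 network
pvesm add nfs backup-nfs \
    --server 10.10.0.20 \
    --export /srv/nfs/proxmox \
    --content backup,images
```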

Troubleshooting

VM Has No Network

Check in order:

# 1. Is the bridge up?
ip link show vmbr0
# 2. Is the physical port up?
ip link show eno1
# 3. Is the VM's tap interface in the bridge?
bridge link show
# 4. Inside VM, is the interface up?
# (via console, not SSH since network is broken)
ip a
# 5. Can VM ping gateway?
ping 10.0.0.1
# 6. Check for VLAN issues
tcpdump -i vmbr0 -n icmp

Traffic Not Reaching VM

# Check bridge forwarding
sysctl net.bridge.bridge-nf-call-iptables
# Should be 0, or iptables might interfere
# Check VM's interface is in bridge
bridge link show master vmbr0
# Should list tap100i0, tap101i0, etc.
# Capture on bridge
tcpdump -i vmbr0 host 10.0.0.100 -n

VLAN Traffic Not Working

# Check VLAN-aware bridge
bridge vlan show
# Should show VLANs per port:
#   vmbr0      1 PVID Egress Untagged
#   eno1       1 PVID Egress Untagged
#              100
#              200
#   tap100i0   100 PVID Egress Untagged
# If a VM's VLAN isn't listed, check the VM config

My Network Layout

Simple Homelab

ISP Router (10.0.0.1)
         └── eno1
        ┌────┴────┐
        │  vmbr0  │  10.0.0.10 (Proxmox)
        │ (bridge)│
        ├─────────┤
        │   VM1   │  10.0.0.101 (VLAN-aware, tag=1)
        │   VM2   │  10.0.0.102 (VLAN-aware, tag=1)
        └─────────┘

Flat network. Simple. Everything on same subnet.

Production with VLANs

Core Switch (trunk port)
      │ VLANs: 10, 20, 30, 100
      └── eno1
     ┌────┴────────────────────┐
     │ vmbr0                   │  VLAN-aware
     │ (PVID 10 = management)  │
     ├─────────────────────────┤
     │ tag=10:  Management VMs │
     │ tag=20:  Production VMs │
     │ tag=30:  DMZ            │
     │ tag=100: Storage        │
     └─────────────────────────┘
Proxmox management: 10.10.0.10/24 (VLAN 10, untagged on bridge)
Production VMs: 10.20.0.0/24 (VLAN 20)
DMZ VMs: 10.30.0.0/24 (VLAN 30)
Storage network: 10.100.0.0/24 (VLAN 100)

One physical NIC, multiple isolated networks.

The Lesson

99% of virtualization network problems are inconsistent Layer 2.

The config looks right. Proxmox is configured. VMs have IPs. But nothing works. Why?

Because somewhere in the chain — switch port, VLAN configuration, bridge settings, VM tag — something doesn’t match.

Virtualization networking requires consistency:

  • Switch port must trunk the right VLANs
  • Bridge must be VLAN-aware if you’re tagging
  • VM must use the correct tag
  • Physical network must route between VLANs (if needed)

When it breaks:

  1. Start at the physical layer (is the cable plugged in?)
  2. Check switch configuration (is VLAN allowed?)
  3. Check bridge configuration (is VLAN-aware enabled?)
  4. Check VM configuration (is tag correct?)
  5. Check inside VM (is interface up?)

The fix is almost always a mismatch somewhere. Find it, fix it, document it so you don’t repeat it.