LXC vs VM: When Containers Are a Gift (and When They Bite)

Proxmox gives you two virtualization options: KVM virtual machines and LXC containers. Both run workloads. Both appear as separate systems. But under the hood, they’re fundamentally different — and that difference matters more than most people realize.

Containers are fast. Boot in seconds, minimal overhead, efficient resource use. VMs are slower to start, use more memory, but provide real isolation.

The question isn’t “which is better” — it’s “which is appropriate.” And getting that wrong means either wasting resources or creating security problems.

How They Actually Work

KVM Virtual Machines

Each VM runs its own kernel. Complete isolation from the host.

Host Kernel (Proxmox)
└── QEMU/KVM Hypervisor
    ├── VM1 (Linux kernel) ─── Processes
    ├── VM2 (Windows kernel) ── Processes
    └── VM3 (Linux kernel) ─── Processes

The hypervisor virtualizes hardware. Each VM thinks it has its own CPU, memory, disk. Complete isolation — a bug in VM1’s kernel can’t affect VM2.

LXC Containers

All containers share the host kernel. Isolation via namespaces and cgroups.

Host Kernel (Proxmox)
├── Container 1 (namespace) ─── Processes
├── Container 2 (namespace) ─── Processes
└── Container 3 (namespace) ─── Processes

Containers are isolated userspace instances. They share the host kernel, just in different namespaces. Faster, lighter — but a kernel vulnerability affects everything.
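You can see the shared kernel for yourself (assuming a running container with ID 200 — adjust to your setup): the container reports the host's kernel, while a VM reports whatever kernel its guest OS booted.

```shell
# On the Proxmox host:
uname -r
# Inside a running container — identical output, because the kernel is shared:
pct exec 200 -- uname -r
# A VM guest runs its own kernel, so the same check there may differ entirely.
```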

Performance Comparison

Aspect            LXC Container      KVM VM
Boot time         1-5 seconds        15-60 seconds
Memory overhead   ~10-50 MB          ~200-500 MB
CPU overhead      Near zero          2-5%
Disk I/O          Native speed       Near-native (virtio)
Network I/O       Native speed       Near-native (virtio)
Density           50-100+ per host   10-30 per host

For an equivalent workload, containers use significantly fewer resources.

Creating Containers

Download Template

# List available templates
pveam available
# Download Ubuntu template
pveam download local ubuntu-24.04-standard_24.04-2_amd64.tar.zst

Or via the web UI: select the local storage under your node → CT Templates → Templates.

Create Container

pct create 200 local:vztmpl/ubuntu-24.04-standard_24.04-2_amd64.tar.zst \
--hostname web-container \
--memory 1024 \
--cores 2 \
--net0 name=eth0,bridge=vmbr0,ip=10.0.0.200/24,gw=10.0.0.1 \
--rootfs local-zfs:8 \
--password "temporary" \
--unprivileged 1
# Start container
pct start 200
# Enter container
pct enter 200

Via Web UI: Create CT → follow wizard.

Unprivileged vs Privileged

Unprivileged (default, recommended):

pct create 200 ... --unprivileged 1
  • Container root (UID 0) maps to unprivileged user on host (UID 100000+)
  • Even if container is compromised, attacker can’t escalate to host root
  • Some things don’t work (NFS mounts, raw disk access)
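The mapping is visible from the host. A sketch, assuming the Proxmox defaults (your /etc/subuid range may differ):

```shell
# The subordinate UID range granted to root on the host:
cat /etc/subuid    # typically: root:100000:65536
# Container UID 0 maps to host UID 100000, container UID 1000 to 101000, etc.
# A process running as root inside an unprivileged CT therefore appears
# on the host as an unprivileged user:
ps -eo uid,user,cmd | grep '^100000'
```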

Privileged:

pct create 200 ... --unprivileged 0
  • Container root is host root (UID 0)
  • Container escape = host root access
  • Needed for: NFS, some filesystems, hardware passthrough
  • Use only when necessary, with additional security

Security Boundaries

This is where the choice matters most.

Container Security Reality

Containers share the kernel. This means:

Kernel vulnerability
  → affects host AND all containers
  → container escape possible

Real examples:

  • Dirty COW (CVE-2016-5195): race condition allowing writes to read-only memory mappings. Used in container escapes.
  • Dirty Pipe (CVE-2022-0847): overwrite data in read-only files. Used in container escapes.
  • Various cgroup escapes: Break out of isolation.

Containers are NOT a security boundary against malicious actors. They’re a convenience boundary for trusted workloads.

VM Security Reality

VMs have their own kernel. Attacker must escape hypervisor:

Guest kernel vulnerability
  → only affects that VM
  → hypervisor escape required for host access

Attacks across the VM boundary exist — VENOM was a genuine QEMU device escape, and Spectre/Meltdown leaked data across isolation boundaries — but they're rarer and usually patched quickly. VMs are a real security boundary.

When Security Matters

Use VMs when:

  • Running untrusted code
  • Multi-tenant (different customers)
  • Security-critical workloads
  • Compliance requirements (PCI-DSS, HIPAA often require VMs)
  • Windows workloads
  • Different OS requirements

Use containers when:

  • Single-tenant (all your own workloads)
  • Trusted code only
  • Resource efficiency matters more than perfect isolation
  • Linux-only workloads
  • Development environments

Practical Use Cases

Container-Appropriate

Pi-hole DNS:

pct create 201 local:vztmpl/debian-12-standard_12.5-1_amd64.tar.zst \
--hostname pihole \
--memory 512 \
--cores 1 \
--net0 name=eth0,bridge=vmbr0,ip=10.0.0.53/24,gw=10.0.0.1 \
--unprivileged 1

DNS is trusted, internal, lightweight. Perfect for container.

Reverse proxy (nginx/traefik):

pct create 202 ... --hostname proxy --memory 256

Forwards traffic, minimal state. Container is ideal.

Internal monitoring (Prometheus, Grafana):

Internal tools, trusted environment. Containers save resources.

VM-Appropriate

Database server:

qm create 300 --name db-server --memory 8192 --cores 4 ...

Critical data. If something breaks, don’t risk it affecting other workloads.

Customer-facing web application:

Untrusted input from internet. VM provides real isolation.

Windows anything:

qm create 301 --name windows-server --ostype win11 ...

Windows doesn’t run in LXC. VMs only.

Kubernetes nodes:

Docker-in-LXC works but is fragile. VMs are more reliable for k8s.

Container Features

Bind Mounts

Share host directories with container:

pct set 200 --mp0 /data/shared,mp=/mnt/shared

Useful for config management, but widens attack surface.
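A couple of variations worth knowing, using the Proxmox mount-point option syntax (paths here are examples — adjust to your storage):

```shell
# Read-only bind mount — container can read host data but not modify it:
pct set 200 --mp0 /data/shared,mp=/mnt/shared,ro=1
# Caveat for unprivileged containers: the UID shift means files owned by
# host root show up as "nobody" inside the CT. If the container needs to
# own them, chown the directory on the host to the mapped root (100000):
chown -R 100000:100000 /data/shared
```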

Device Passthrough

For hardware access (requires privileged or specific config):

# Example: NVIDIA GPU (character device major 195)
echo "lxc.cgroup2.devices.allow: c 195:* rwm" >> /etc/pve/lxc/200.conf
echo "lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file" >> /etc/pve/lxc/200.conf

Complex and fragile. Consider VM for hardware passthrough.
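For comparison, PCI passthrough to a VM is a single, well-supported option. A sketch — the PCI address below is a placeholder (find yours with lspci), and IOMMU must be enabled in firmware and on the kernel command line:

```shell
# Find the device address:
lspci -nn | grep -i nvidia
# Pass it through to VM 100 (example address — use your own):
qm set 100 --hostpci0 0000:01:00.0,pcie=1
# Note: pcie=1 requires the q35 machine type on the VM.
```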

Nesting (Docker in LXC)

Run Docker inside container:

pct set 200 --features nesting=1

Works for simple cases. For production Docker workloads, use VMs.

Resource Limits

Container Limits

# Memory (hard limit)
pct set 200 --memory 2048
# CPU cores
pct set 200 --cores 2
# CPU limit (percentage)
pct set 200 --cpulimit 1.5 # Max 1.5 cores worth
# Disk quota
pct resize 200 rootfs 20G
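These limits land as cgroup v2 settings, so you can verify them from inside the container (paths assume a cgroup v2 host, the default on current Proxmox):

```shell
# Memory limit as seen from inside CT 200 (2048 MB = 2147483648 bytes):
pct exec 200 -- cat /sys/fs/cgroup/memory.max
# CPU quota for --cpulimit 1.5: 150000us of CPU per 100000us period
pct exec 200 -- cat /sys/fs/cgroup/cpu.max
```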

VM Limits

# Memory
qm set 100 --memory 4096
# Ballooning (dynamic memory)
qm set 100 --balloon 2048 # Minimum, can grow to --memory
# CPU
qm set 100 --cores 4 --sockets 1
# CPU type
qm set 100 --cpu host # Pass through host CPU features

Migration

Container Migration

Fast — only filesystem moves:

# Offline (stopped)
pct migrate 200 pve2
# Online (running) - requires shared storage
pct migrate 200 pve2 --online

VM Migration

Slower — memory state must transfer:

# Offline
qm migrate 100 pve2
# Live migration - requires shared storage
qm migrate 100 pve2 --online

Backup and Restore

Both work similarly:

# Backup container
vzdump 200 --storage backup --mode snapshot
# Backup VM
vzdump 100 --storage backup --mode snapshot
# Restore
pct restore 200 /backup/vzdump-lxc-200-*.tar.zst
qmrestore /backup/vzdump-qemu-100-*.vma.zst 100

Containers backup faster (smaller, no memory state).
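In practice you'll want compression and retention on top of the basics. A sketch — the storage name "backup" is carried over from above, and --prune-backups assumes PVE 6.4 or later:

```shell
# Snapshot-mode backup with zstd compression, keeping only the last 7:
vzdump 200 --storage backup --mode snapshot --compress zstd \
  --prune-backups keep-last=7
# Or a tiered retention policy:
vzdump 200 --storage backup --mode snapshot --compress zstd \
  --prune-backups keep-daily=7,keep-weekly=4
```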

Hybrid Approach

In practice, use both:

Production Layout:
├── LXC Containers (internal services)
│   ├── 200: pihole (DNS)
│   ├── 201: nginx (reverse proxy)
│   ├── 202: prometheus (monitoring)
│   └── 203: grafana (dashboards)
└── VMs (security-sensitive)
    ├── 100: web-app (internet-facing)
    ├── 101: database (critical data)
    ├── 102: backup-server (recovery)
    └── 103: windows-dc (Active Directory)

Containers for internal, trusted, lightweight. VMs for external, critical, or Windows.

Troubleshooting

Container Won’t Start

# Check logs
pct start 200 --debug
# Common issues:
# - AppArmor blocking: check /var/log/kern.log
# - Disk full: check storage
# - Network collision: check IP conflicts

Container Networking Issues

# Enter container
pct enter 200
# Check interface
ip a
# Check gateway
ip route
# From host, check bridge membership (brctl is deprecated)
ip link show master vmbr0

Unprivileged Container Limitations

If something fails in unprivileged container:

# Try enabling features
pct set 200 --features nesting=1,keyctl=1
# If still fails, might need privileged
# Consider: is this really appropriate for a container?

The Lesson

Containers are speed, but not always isolation.

The temptation is to use containers everywhere — they’re faster, lighter, easier. But containers share a kernel. That kernel is your security boundary. A container escape becomes a host compromise.

When to contain:

  • Trusted workloads
  • Single-owner environment
  • Resource efficiency priority
  • Linux services
  • Internal tools

When to virtualize:

  • Untrusted inputs
  • Multi-tenant
  • Security-critical
  • Windows
  • Compliance requirements

The hybrid approach works best: containers for the lightweight stuff, VMs for what matters. Don’t let container efficiency seduce you into containerizing everything. Sometimes the VM overhead is the security boundary you need.