Proxmox gives you two virtualization options: KVM virtual machines and LXC containers. Both run workloads. Both appear as separate systems. But under the hood, they’re fundamentally different — and that difference matters more than most people realize.
Containers are fast. Boot in seconds, minimal overhead, efficient resource use. VMs are slower to start, use more memory, but provide real isolation.
The question isn’t “which is better” — it’s “which is appropriate.” And getting that wrong means either wasting resources or creating security problems.
How They Actually Work
KVM Virtual Machines
Each VM runs its own kernel. Complete isolation from the host.
```
Host Kernel (Proxmox)
│
└── QEMU/KVM Hypervisor
    │
    ├── VM1 (Linux kernel) ─── Processes
    ├── VM2 (Windows kernel) ── Processes
    └── VM3 (Linux kernel) ─── Processes
```

The hypervisor virtualizes hardware. Each VM thinks it has its own CPU, memory, disk. Complete isolation — a bug in VM1’s kernel can’t affect VM2.
LXC Containers
All containers share the host kernel. Isolation via namespaces and cgroups.
```
Host Kernel (Proxmox)
│
├── Container 1 (namespace) ─── Processes
├── Container 2 (namespace) ─── Processes
└── Container 3 (namespace) ─── Processes
```

Containers are isolated userspace instances. They share the host kernel, just in different namespaces. Faster, lighter — but a kernel vulnerability affects everything.
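The kernel sharing is easy to verify yourself. A quick sketch, assuming a Proxmox host; CT ID 200 is a hypothetical example, so the `pct exec` line is shown commented out:

```shell
# There is exactly one kernel: the host's. Every container reports the
# same version string, because they all run on that kernel.
host_kernel="$(uname -r)"
echo "host kernel: $host_kernel"

# On a Proxmox host with a running container (200 is a hypothetical ID),
# the identical string comes back from inside it:
# pct exec 200 -- uname -r
```

If those two strings ever differed, you'd be looking at a VM, not a container.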
Performance Comparison
| Aspect | LXC Container | KVM VM |
|---|---|---|
| Boot time | 1-5 seconds | 15-60 seconds |
| Memory overhead | ~10-50MB | ~200-500MB |
| CPU overhead | Near zero | 2-5% |
| Disk I/O | Native speed | Near-native (virtio) |
| Network I/O | Native speed | Near-native (virtio) |
| Density | 50-100+ per host | 10-30 per host |
For an equivalent workload, containers use significantly fewer resources.
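The density row falls out of the overhead numbers. A back-of-the-envelope sketch using midpoints from the table above; the 32 GB host, the 2 GB reserve for Proxmox itself, and the 512 MB workload are all hypothetical round numbers:

```shell
host_ram_mb=32768     # hypothetical 32 GB host
reserved_mb=2048      # keep some RAM for Proxmox itself
ct_overhead_mb=50     # typical LXC overhead (midpoint from table)
vm_overhead_mb=500    # typical KVM overhead (midpoint from table)
workload_mb=512       # the service itself, identical either way

available=$((host_ram_mb - reserved_mb))
ct_count=$((available / (ct_overhead_mb + workload_mb)))
vm_count=$((available / (vm_overhead_mb + workload_mb)))
echo "containers: $ct_count, VMs: $vm_count"
# → containers: 54, VMs: 30
```

Same hardware, same services: roughly 1.8x the density with containers, which is exactly the range the table claims.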
Creating Containers
Download Template
```bash
# List available templates
pveam available

# Download Ubuntu template
pveam download local ubuntu-24.04-standard_24.04-2_amd64.tar.zst
```

Or via Web UI: Datacenter → local → CT Templates → Templates
Create Container
```bash
pct create 200 local:vztmpl/ubuntu-24.04-standard_24.04-2_amd64.tar.zst \
  --hostname web-container \
  --memory 1024 \
  --cores 2 \
  --net0 name=eth0,bridge=vmbr0,ip=10.0.0.200/24,gw=10.0.0.1 \
  --storage local-zfs \
  --rootfs local-zfs:8 \
  --password "temporary" \
  --unprivileged 1
```
```bash
# Start container
pct start 200

# Enter container
pct enter 200
```

Via Web UI: Create CT → follow wizard.
Unprivileged vs Privileged
Unprivileged (default, recommended):
```bash
pct create 200 ... --unprivileged 1
```

- Container root (UID 0) maps to an unprivileged user on the host (UID 100000+)
- Even if container is compromised, attacker can’t escalate to host root
- Some things don’t work (NFS mounts, raw disk access)
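With the default idmap the translation is a fixed offset: container UID N appears on the host as 100000 + N. A minimal sketch of that arithmetic; 100000 is Proxmox’s default base, and the actual range on your host is configured in `/etc/subuid`:

```shell
base=100000     # Proxmox default: first host UID for unprivileged CTs

# Translate a container UID to the host UID it maps to
map_uid() {
  echo $((base + $1))
}

map_uid 0       # container root -> host UID 100000 (no host privileges)
map_uid 1000    # a normal container user -> host UID 101000
```

This is why a compromised container root can only touch host files owned by UID 100000+, which normally means nothing outside the container’s own rootfs.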
Privileged:
```bash
pct create 200 ... --unprivileged 0
```

- Container root is host root (UID 0)
- Container escape = host root access
- Needed for: NFS, some filesystems, hardware passthrough
- Use only when necessary, with additional security
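Since privileged containers should be the exception, it is worth auditing for them. A sketch of the check: on a real host the configs live in `/etc/pve/lxc/*.conf`, but this example builds throwaway files in a temp directory so it is self-contained:

```shell
# Simulate two CT config files (real location: /etc/pve/lxc/)
dir="$(mktemp -d)"
printf 'unprivileged: 1\nmemory: 512\n' > "$dir/200.conf"
printf 'memory: 1024\n'                 > "$dir/201.conf"   # flag absent: privileged

# Flag every config that does not declare "unprivileged: 1"
for conf in "$dir"/*.conf; do
  grep -q '^unprivileged: 1' "$conf" || echo "PRIVILEGED: $conf"
done
# prints one PRIVILEGED line, for 201.conf

rm -rf "$dir"
```

Point the loop at `/etc/pve/lxc/*.conf` on a real node and anything it prints deserves a second look.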
Security Boundaries
This is where the choice matters most.
Container Security Reality
Containers share the kernel. This means:
```
Kernel vulnerability
        ↓
Affects host AND all containers
        ↓
Container escape possible
```

Real examples:
- Dirty COW (CVE-2016-5195): Write to read-only memory. Container escape.
- Dirty Pipe (CVE-2022-0847): Overwrite files. Container escape.
- Various cgroup escapes: Break out of isolation.
Containers are NOT a security boundary against malicious actors. They’re a convenience boundary for trusted workloads.
VM Security Reality
VMs have their own kernel. Attacker must escape hypervisor:
```
Guest kernel vulnerability
        ↓
Only affects that VM
        ↓
Hypervisor escape required for host access
```

Attacks across the VM boundary exist — VENOM was a true hypervisor escape, while Spectre and Meltdown leak data across it via CPU side channels — but they are rarer and usually patched quickly. VMs are a real security boundary.
When Security Matters
Use VMs when:
- Running untrusted code
- Multi-tenant (different customers)
- Security-critical workloads
- Compliance requirements (PCI-DSS and HIPAA assessors often expect VM-level isolation)
- Windows workloads
- Different OS requirements
Use containers when:
- Single-tenant (all your own workloads)
- Trusted code only
- Resource efficiency matters more than perfect isolation
- Linux-only workloads
- Development environments
Practical Use Cases
Container-Appropriate
Pi-hole DNS:
```bash
pct create 201 local:vztmpl/debian-12-standard_12.5-1_amd64.tar.zst \
  --hostname pihole \
  --memory 512 \
  --cores 1 \
  --net0 name=eth0,bridge=vmbr0,ip=10.0.0.53/24,gw=10.0.0.1 \
  --unprivileged 1
```

DNS is trusted, internal, and lightweight. Perfect for a container.
Reverse proxy (nginx/traefik):
```bash
pct create 202 ... --hostname proxy --memory 256
```

Forwards traffic, minimal state. Container is ideal.
Internal monitoring (Prometheus, Grafana):
Internal tools, trusted environment. Containers save resources.
VM-Appropriate
Database server:
```bash
qm create 300 --name db-server --memory 8192 --cores 4 ...
```

Critical data. If something breaks, don’t risk it affecting other workloads.
Customer-facing web application:
Untrusted input from internet. VM provides real isolation.
Windows anything:
```bash
qm create 301 --name windows-server --ostype win11 ...
```

Windows doesn’t run in LXC. VMs only.
Kubernetes nodes:
Docker-in-LXC works but is fragile. VMs are more reliable for k8s.
Container Features
Bind Mounts
Share host directories with container:
```bash
pct set 200 --mp0 /data/shared,mp=/mnt/shared
```

Useful for config management, but widens the attack surface.
Device Passthrough
For hardware access (requires privileged or specific config):
```bash
# GPU passthrough
pct set 200 --features nesting=1
echo "lxc.cgroup2.devices.allow: c 195:* rwm" >> /etc/pve/lxc/200.conf
echo "lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file" >> /etc/pve/lxc/200.conf
```

Complex and fragile. Consider a VM for hardware passthrough.
Nesting (Docker in LXC)
Run Docker inside container:
```bash
pct set 200 --features nesting=1
```

Works for simple cases. For production Docker workloads, use VMs.
Resource Limits
Container Limits
```bash
# Memory (hard limit)
pct set 200 --memory 2048

# CPU cores
pct set 200 --cores 2

# CPU limit (fraction of cores, not a percentage)
pct set 200 --cpulimit 1.5  # max 1.5 cores' worth of CPU time

# Disk quota
pct resize 200 rootfs 20G
```

VM Limits
```bash
# Memory
qm set 100 --memory 4096

# Ballooning (dynamic memory)
qm set 100 --balloon 2048  # minimum; can grow to --memory

# CPU
qm set 100 --cores 4 --sockets 1

# CPU type
qm set 100 --cpu host  # pass through host CPU features
```

Migration
Container Migration
Fast — only filesystem moves:
```bash
# Offline (stopped)
pct migrate 200 pve2

# Online (running) - requires shared storage
pct migrate 200 pve2 --online
```

VM Migration
Slower — memory state must transfer:
```bash
# Offline
qm migrate 100 pve2

# Live migration - requires shared storage
qm migrate 100 pve2 --online
```

Backup and Restore
Both work similarly:
```bash
# Backup container
vzdump 200 --storage backup --mode snapshot

# Backup VM
vzdump 100 --storage backup --mode snapshot

# Restore
pct restore 200 /backup/vzdump-lxc-200-*.tar.zst
qm restore 100 /backup/vzdump-qemu-100-*.vma.zst
```

Containers back up faster (smaller, no memory state).
Hybrid Approach
In practice, use both:
```
Production Layout:
├── LXC Containers (internal services)
│   ├── 200: pihole (DNS)
│   ├── 201: nginx (reverse proxy)
│   ├── 202: prometheus (monitoring)
│   └── 203: grafana (dashboards)
│
└── VMs (security-sensitive)
    ├── 100: web-app (internet-facing)
    ├── 101: database (critical data)
    ├── 102: backup-server (recovery)
    └── 103: windows-dc (Active Directory)
```

Containers for internal, trusted, lightweight. VMs for external, critical, or Windows.
Troubleshooting
Container Won’t Start
```bash
# Check logs
pct start 200 --debug

# Common issues:
# - AppArmor blocking: check /var/log/kern.log
# - Disk full: check storage
# - Network collision: check for IP conflicts
```

Container Networking Issues
```bash
# Enter container
pct enter 200

# Check interface
ip a

# Check gateway
ip route

# From host, check bridge
brctl show vmbr0
```

Unprivileged Container Limitations
If something fails in unprivileged container:
```bash
# Try enabling features
pct set 200 --features nesting=1,keyctl=1

# If it still fails, it might need to be privileged.
# Consider: is this really appropriate for a container?
```

The Lesson
Containers give you speed, but not always isolation.
The temptation is to use containers everywhere — they’re faster, lighter, easier. But containers share a kernel. That kernel is your security boundary. A container escape becomes a host compromise.
When to contain:
- Trusted workloads
- Single-owner environment
- Resource efficiency priority
- Linux services
- Internal tools
When to virtualize:
- Untrusted inputs
- Multi-tenant
- Security-critical
- Windows
- Compliance requirements
The hybrid approach works best: containers for the lightweight stuff, VMs for what matters. Don’t let container efficiency seduce you into containerizing everything. Sometimes the VM overhead is the security boundary you need.