Storage 101: Local, ZFS, LVM-thin — What I Actually Use and Why

Storage decisions in Proxmox affect everything downstream. Choose wrong and you’re either rebuilding later or living with limitations. Choose right and you forget storage exists — it just works.

The problem is “right” depends on your use case. ZFS is amazing until your 8GB RAM server starts swapping. LVM-thin is fast until you need to migrate VMs. Directory storage is simple until you want snapshots.

This is what I actually use and why, after trying all of them.

Storage Types in Proxmox

Proxmox supports multiple storage backends. Each has trade-offs:

Type        Snapshots   Live Backup   Thin Provisioning   Notes
Directory   No*         Yes           No                  Simple, on any filesystem
LVM         No          Yes           No                  Block storage, no snapshots
LVM-thin    Yes         Yes           Yes                 Block storage with thin volumes
ZFS         Yes         Yes           Yes                 Best features, needs RAM
Ceph        Yes         Yes           Yes                 Distributed, complex
NFS/CIFS    No*         Yes           Depends             Network storage

*Can use qcow2 format for snapshots, but slower

What Gets Stored Where

Before diving into backends, understand what you’re storing:

  • VM Disks: The actual virtual hard drives. Performance critical.
  • ISO Images: Installation media. Read-once, performance doesn’t matter.
  • Container Templates: LXC images. Small, read occasionally.
  • Backups: Compressed VM snapshots. Large, written sequentially.
  • Snippets: Cloud-init configs, hook scripts. Tiny files.

Not everything needs fast storage. Putting ISOs on your NVMe ZFS pool wastes space.
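
A quick way to see which storage holds which content types (this reflects a default install; your layout may differ):

Terminal window
# Storage definitions and their content types
cat /etc/pve/storage.cfg
# Status, usage, and free space per storage
pvesm status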

Directory Storage

The simplest option. Just a folder on a filesystem.

Terminal window
# Default directories after install
/var/lib/vz/template/iso # ISO images
/var/lib/vz/template/cache # Container templates
/var/lib/vz/dump # Backups

When to Use Directory Storage

  • ISO images (read once during install)
  • Container templates (read once during creation)
  • Backups (sequential writes, then archive)
  • Small deployments where simplicity matters

Directory Limitations

  • No atomic snapshots (unless using qcow2 format, which is slower; see the example after this list)
  • No thin provisioning — disk images use actual space
  • Performance depends entirely on underlying filesystem
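
If you do want snapshot-capable VM disks on directory storage, qcow2 gets you there at the cost of some speed. A minimal sketch, assuming a hypothetical storage name and an existing VM with ID 100:

Terminal window
# Hypothetical: directory storage that allows VM disk images
pvesm add dir dir-vm --path /mnt/storage/vm-images --content images
# New 32GB disk in qcow2 format, which supports snapshots on directory storage
qm set 100 --scsi1 dir-vm:32,format=qcow2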

Adding Directory Storage

Terminal window
# Create directory
mkdir -p /mnt/storage/proxmox
# Add to Proxmox
pvesm add dir backup-storage --path /mnt/storage/proxmox --content backup,iso,vztmpl

Or via Web UI: Datacenter → Storage → Add → Directory

LVM-thin

LVM with thin provisioning. You allocate a pool, then create thin volumes that share space.

Physical Disk (500GB)
└── Volume Group: pve
    └── Thin Pool: data (400GB allocated)
        ├── VM 100 disk (100GB virtual, 20GB actual)
        ├── VM 101 disk (100GB virtual, 35GB actual)
        └── VM 102 disk (100GB virtual, 15GB actual)

→ Total actual usage: 70GB in 400GB pool
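
If you are building a thin pool by hand instead of using the installer default, the layout above maps to a few commands. A sketch, assuming a spare disk at /dev/sdb and hypothetical names:

Terminal window
# Volume group on the spare disk, then a thin pool inside it
vgcreate vmdata /dev/sdb
lvcreate --type thin-pool -L 400G -n data vmdata
# Register with Proxmox so VM disks get created as thin volumes
pvesm add lvmthin vmdata-thin --vgname vmdata --thinpool data --content images,rootdir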

LVM-thin Advantages

  • Thin provisioning: Allocate more than you have, pay for what you use
  • Snapshots: LVM snapshots work (with caveats)
  • Speed: Direct block access, no filesystem overhead
  • Low memory: No significant RAM overhead

LVM-thin Disadvantages

  • No checksums: Data corruption is silent
  • Snapshot overhead: Snapshots slow down writes
  • Pool can fill: Over-provisioning requires monitoring
  • Migration complexity: Moving thin volumes isn’t trivial

Default LVM-thin Setup

Proxmox installer creates this automatically:

Terminal window
# Check LVM-thin pool
lvs
# NAME VG Attr LSize
# data pve twi-aotz-- 400g
# Check thin pool usage
lvs -o+data_percent

When LVM-thin Pool Fills

This is the danger zone. If your thin pool hits 100%, VMs pause or corrupt.

Monitor it:

Terminal window
# Check usage
lvs -o name,size,data_percent pve/data
# Alert script (save it, then run it from cron)
USAGE=$(lvs --noheadings -o data_percent pve/data | cut -d. -f1 | tr -d ' ')
if [ "$USAGE" -gt 80 ]; then
  echo "LVM thin pool at ${USAGE}%" | mail -s "Storage Alert" admin@example.com
fi
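
LVM can also grow the pool automatically when the volume group still has free space. It is a safety net rather than a fix, and only helps if you left unallocated space in the VG, but the settings live in /etc/lvm/lvm.conf:

Terminal window
# In /etc/lvm/lvm.conf, activation section:
#   thin_pool_autoextend_threshold = 80   (extend when the pool hits 80%)
#   thin_pool_autoextend_percent = 20     (grow it by 20% each time)
# Check the current values
lvmconfig activation/thin_pool_autoextend_threshold activation/thin_pool_autoextend_percent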

ZFS

My preferred choice for most deployments. ZFS is a filesystem and volume manager combined.

ZFS Advantages

  • Checksums: Every block is verified, silent corruption is detected
  • Snapshots: Instant, cheap, no performance penalty during creation
  • Compression: lz4 compression is basically free (often faster than uncompressed!)
  • Send/Receive: Efficient replication to another system
  • Self-healing: With redundancy, bad blocks are automatically repaired

ZFS Disadvantages

  • RAM hungry: Wants 1GB+ per TB of storage for optimal ARC
  • CPU for compression: Minimal with lz4, noticeable with zstd
  • Complexity: More knobs to understand
  • No shrink: Can’t reduce pool size

ZFS Pool Status

Terminal window
# Pool health
zpool status
# pool: rpool
# state: ONLINE
# config:
# NAME STATE READ WRITE CKSUM
# rpool ONLINE 0 0 0
# nvme0n1p3 ONLINE 0 0 0
# Space usage
zfs list
# NAME USED AVAIL REFER MOUNTPOINT
# rpool 120G 280G 96K /rpool
# rpool/ROOT 50G 280G 96K /rpool/ROOT
# rpool/data 70G 280G 96K /rpool/data
# Check compression ratio
zfs get compressratio rpool/data
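
Checksums only catch corruption when blocks are actually read, so scrub the pool on a schedule. The Debian ZFS packages ship a monthly scrub cron job by default (check /etc/cron.d/zfsutils-linux), and you can kick one off manually:

Terminal window
# Read and verify every block in the pool
zpool scrub rpool
# Progress and results show up under "scan:"
zpool status rpool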

Tuning ZFS for Proxmox

Limit ARC to leave RAM for VMs:

Terminal window
# Check current ARC size
arc_summary | grep "ARC size"
# Limit to 4GB (adjust based on your RAM)
echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf
update-initramfs -u
reboot

Rule of thumb: Give ARC 1GB per TB of storage, minimum 1GB, maximum 50% of RAM.
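
As a worked example of that rule: a 4TB pool on a 32GB host gets 4GB of ARC (1GB per TB), well under the 50% cap of 16GB, so 4GB stands. That is where the 4294967296 above comes from:

Terminal window
# 4GB expressed in bytes for zfs_arc_max
echo $((4 * 1024 * 1024 * 1024))
# 4294967296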

Adding ZFS Storage

Terminal window
# Create new pool on separate disk
zpool create -o ashift=12 tank /dev/sdb
# Enable compression
zfs set compression=lz4 tank
# Create dataset for VMs
zfs create tank/vms
# Add to Proxmox
pvesm add zfspool tank-vms --pool tank/vms --content images,rootdir
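
Worth a quick sanity check afterwards that both ZFS and Proxmox agree the storage exists:

Terminal window
# Pool is online and compression is set
zpool list tank
zfs get compression tank
# Proxmox sees the new storage
pvesm status --storage tank-vms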

My Actual Setup

Single Node Homelab

NVMe 500GB (rpool)
├── rpool/ROOT # Proxmox OS (50GB)
└── rpool/data # VM disks (ZFS, compression, snapshots)
SATA SSD 1TB (tank)
└── tank/backups # Backups (directory storage)

Why:

  • ZFS for VM disks — I want snapshots and checksums
  • Separate disk for backups — if rpool dies, backups survive
  • Compression saves 20-40% space on typical workloads

Production Cluster

NVMe 256GB (rpool)
└── rpool/ROOT # OS only, small and fast
2x SATA SSD 1TB (mirror, vmpool)
└── vmpool/data # VM disks with redundancy
2x HDD 4TB (mirror, backup)
└── backup/proxmox # Proxmox Backup Server storage

Why:

  • Separate OS from VMs — OS disk failure doesn’t lose VMs
  • Mirrors for redundancy — single disk failure = no downtime
  • HDDs for backups — capacity over speed, write once read rarely
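
For reference, a mirrored pool like vmpool above comes from a single zpool create. A sketch, assuming the two SSDs are /dev/sdb and /dev/sdc (in practice, use the stable /dev/disk/by-id/ paths):

Terminal window
# Two-way mirror: one disk can fail with no data loss
zpool create -o ashift=12 vmpool mirror /dev/sdb /dev/sdc
zfs set compression=lz4 vmpool
zfs create vmpool/data
pvesm add zfspool vmpool-data --pool vmpool/data --content images,rootdir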

Snapshots vs Backups

This is where people get confused.

Snapshots Are Not Backups

A snapshot is a point-in-time view stored on the same disk:

Terminal window
# Create ZFS snapshot
zfs snapshot rpool/data/vm-100-disk-0@before-upgrade
# List snapshots
zfs list -t snapshot
# Rollback
zfs rollback rpool/data/vm-100-disk-0@before-upgrade

Snapshots are:

  • Instant: No performance penalty to create
  • Same disk: If disk dies, snapshots die too
  • For rollback: Made a bad change? Roll back in seconds

Snapshots are NOT:

  • Off-site: They’re on the same physical disk
  • Disaster recovery: Disk failure loses everything
  • Long-term retention: Too many snapshots = space + performance issues
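
To see what snapshots are actually costing you, and to clean up old ones:

Terminal window
# Space used per snapshot, largest last
zfs list -t snapshot -o name,used -s used
# Remove one that is no longer needed
zfs destroy rpool/data/vm-100-disk-0@before-upgrade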

Backups Are Copies Elsewhere

Terminal window
# Proxmox backup (stores on backup storage)
vzdump 100 --storage backup-storage --mode snapshot
# ZFS send to another system
zfs send rpool/data/vm-100-disk-0@backup | ssh backup-server zfs recv tank/backups/vm-100

Backups are:

  • Off-system: Different disk, different machine, different building
  • Disaster recovery: Original dies, restore from backup
  • Long-term: Keep 30 days, 12 weeks, whatever you need

Use both. Snapshots for quick rollbacks (before upgrades, config changes). Backups for disaster recovery.
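
For retention, vzdump can prune old backups as part of the job. A sketch of a keep policy (adjust the numbers to your needs):

Terminal window
# Keep 7 daily and 4 weekly backups, prune the rest
vzdump 100 --storage backup-storage --mode snapshot --prune-backups keep-daily=7,keep-weekly=4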

Storage Performance

Testing Your Storage

Before putting workloads on storage, benchmark it:

Terminal window
# Install fio
apt install fio
# Random 4K writes (database-like)
fio --name=rand-write --ioengine=libaio --iodepth=32 --rw=randwrite --bs=4k --direct=1 --size=1G --numjobs=4 --runtime=60 --group_reporting --filename=/rpool/data/test.fio
# Sequential writes (backup-like)
fio --name=seq-write --ioengine=libaio --iodepth=1 --rw=write --bs=1m --direct=1 --size=4G --numjobs=1 --runtime=60 --group_reporting --filename=/rpool/data/test.fio
# Cleanup
rm /rpool/data/test.fio

Typical numbers:

  • NVMe: 500K+ IOPS random, 3GB/s+ sequential
  • SATA SSD: 50K IOPS random, 500MB/s sequential
  • HDD: 150 IOPS random, 150MB/s sequential

Fast Often Means Fragile

High-performance storage often sacrifices safety:

  • NVMe without power loss protection: Data corruption on power loss
  • Write caching without battery backup: Same problem
  • Consumer SSDs: Not designed for write-heavy workloads

For VMs that matter, use enterprise SSDs with power loss protection or ZFS with a proper setup (mirrors, proper RAM, UPS).

Storage Migration

Need to move VMs between storage backends?

Online Migration (VM Running)

Terminal window
# Move disk to different storage
qm move_disk 100 scsi0 target-storage
# Or via Web UI: VM → Hardware → Disk → Move Storage

Offline Migration

Terminal window
# Stop VM
qm stop 100
# Dump to a file, then restore onto the target storage
vzdump 100 --dumpdir /tmp --compress zstd
qmrestore /tmp/vzdump-qemu-100-*.vma.zst 100 --storage target-storage --force

ZFS Send/Receive

For ZFS-to-ZFS, this is most efficient:

Terminal window
# Send to remote
zfs send rpool/data/vm-100-disk-0@migrate | ssh target-host zfs recv tank/data/vm-100-disk-0
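
For recurring replication, incremental sends transfer only the blocks that changed since the previous snapshot:

Terminal window
# First run: full send of a baseline snapshot
zfs send rpool/data/vm-100-disk-0@base | ssh target-host zfs recv tank/data/vm-100-disk-0
# Later runs: take a new snapshot and send only the delta since @base
zfs snapshot rpool/data/vm-100-disk-0@daily1
zfs send -i @base rpool/data/vm-100-disk-0@daily1 | ssh target-host zfs recv tank/data/vm-100-disk-0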

The Lesson

Snapshots are not backups. And fast often means fragile.

Storage is where data lives. Get it wrong and you lose everything. The temptation is to optimize for speed — NVMe everything, no redundancy, maximum performance.

Then a disk fails. Or worse, corrupts silently. And you discover that your snapshots were on the same disk that died.

My approach:

  1. ZFS for data that matters — checksums catch corruption
  2. Mirrors for production — single disk failure = no panic
  3. Separate backup storage — not on the same disk, not on the same host
  4. Test restores — a backup you haven’t restored is a backup you hope works

The boring, redundant setup survives. The fast, minimal setup survives until it doesn’t.