Storage decisions in Proxmox affect everything downstream. Choose wrong and you’re either rebuilding later or living with limitations. Choose right and you forget storage exists — it just works.
The problem is “right” depends on your use case. ZFS is amazing until your 8GB RAM server starts swapping. LVM-thin is fast until you need to migrate VMs. Directory storage is simple until you want snapshots.
This is what I actually use and why, after trying all of them.
Storage Types in Proxmox
Proxmox supports multiple storage backends. Each has trade-offs:
| Type | Snapshots | Live Backup | Thin Provisioning | Notes |
|---|---|---|---|---|
| Directory | No* | Yes | No | Simple, on any filesystem |
| LVM | No | Yes | No | Block storage, no snapshots |
| LVM-thin | Yes | Yes | Yes | Block storage with thin volumes |
| ZFS | Yes | Yes | Yes | Best features, needs RAM |
| Ceph | Yes | Yes | Yes | Distributed, complex |
| NFS/CIFS | No* | Yes | Depends | Network storage |
*Can use qcow2 format for snapshots, but slower
What Gets Stored Where
Before diving into backends, understand what you’re storing:
- VM Disks: The actual virtual hard drives. Performance critical.
- ISO Images: Installation media. Read-once, performance doesn’t matter.
- Container Templates: LXC images. Small, read occasionally.
- Backups: Compressed VM snapshots. Large, written sequentially.
- Snippets: Cloud-init configs, hook scripts. Tiny files.
Not everything needs fast storage. Putting ISOs on your NVMe ZFS pool wastes space.
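To see what each configured storage is allowed to hold, and to tighten that list, Proxmox exposes it through pvesm and /etc/pve/storage.cfg. A quick sketch, assuming the default storage named local:
```
# List configured storages and their status
pvesm status

# The content types each storage accepts live here
cat /etc/pve/storage.cfg

# Restrict a storage to the content it should actually hold
# ("local" is the default directory storage; adjust the name and types to taste)
pvesm set local --content iso,vztmpl,backup
```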
Directory Storage
The simplest option. Just a folder on a filesystem.
```
# Default directories after install
/var/lib/vz/template/iso     # ISO images
/var/lib/vz/template/cache   # Container templates
/var/lib/vz/dump             # Backups
```
When to Use Directory Storage
- ISO images (read once during install)
- Container templates (read once during creation)
- Backups (sequential writes, then archive)
- Small deployments where simplicity matters
Directory Limitations
- No atomic snapshots (unless using qcow2 format, which is slower; see the example after this list)
- No thin provisioning — disk images use actual space
- Performance depends entirely on underlying filesystem
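If you do need snapshots on directory storage, allocate the disk as qcow2. A minimal sketch, assuming VMID 100, bus slot scsi1, a hypothetical directory storage called dir-vms, and a 32 GiB disk:
```
# Attach a new 32 GiB qcow2 disk on a directory storage so snapshots work
# (VMID, slot, storage name, and size are examples)
qm set 100 --scsi1 dir-vms:32,format=qcow2
```
Raw images on the same storage are a bit faster but give up snapshots entirely.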
Adding Directory Storage
```
# Create directory
mkdir -p /mnt/storage/proxmox

# Add to Proxmox
pvesm add dir backup-storage --path /mnt/storage/proxmox --content backup,iso,vztmpl
```
Or via Web UI: Datacenter → Storage → Add → Directory
LVM-thin
LVM with thin provisioning. You allocate a pool, then create thin volumes that share space.
```
Physical Disk (500GB)
└── Volume Group: pve
    └── Thin Pool: data (400GB allocated)
        ├── VM 100 disk (100GB virtual, 20GB actual)
        ├── VM 101 disk (100GB virtual, 35GB actual)
        └── VM 102 disk (100GB virtual, 15GB actual)

→ Total actual usage: 70GB in 400GB pool
```
LVM-thin Advantages
- Thin provisioning: Allocate more than you have, pay for what you use
- Snapshots: LVM snapshots work (with caveats)
- Speed: Direct block access, no filesystem overhead
- Low memory: No significant RAM overhead
LVM-thin Disadvantages
- No checksums: Data corruption is silent
- Snapshot overhead: Snapshots slow down writes
- Pool can fill: Over-provisioning requires monitoring
- Migration complexity: Moving thin volumes isn’t trivial
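The installer only builds the default pool on the boot disk. For a second LVM-thin pool on another disk, a minimal sketch where the device /dev/sdb, volume group vmdata, pool name data, and storage ID vm-thin are all assumptions:
```
# Create a thin pool on an empty second disk
pvcreate /dev/sdb
vgcreate vmdata /dev/sdb
lvcreate -l 90%FREE --thinpool data vmdata   # leave headroom for pool metadata

# Register it with Proxmox for VM and container disks
pvesm add lvmthin vm-thin --vgname vmdata --thinpool data --content images,rootdir
```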
Default LVM-thin Setup
Proxmox installer creates this automatically:
```
# Check LVM-thin pool
lvs
# NAME  VG   Attr        LSize
# data  pve  twi-aotz--  400g

# Check thin pool usage
lvs -o+data_percent
```
When LVM-thin Pool Fills
This is the danger zone. If your thin pool hits 100%, writes start failing, VMs hit I/O errors, and guest filesystems can corrupt.
Monitor it:
```
# Check usage
lvs -o name,size,data_percent pve/data

# Set up alert (add to cron)
USAGE=$(lvs --noheadings -o data_percent pve/data | tr -d ' %')
USAGE=${USAGE%%.*}   # data_percent is fractional (e.g. 47.62); keep the integer part
if [ "$USAGE" -gt 80 ]; then
  echo "LVM thin pool at ${USAGE}%" | mail -s "Storage Alert" admin@example.com
fi
```
ZFS
My preferred choice for most deployments. ZFS is a filesystem and volume manager combined.
ZFS Advantages
- Checksums: Every block is verified, silent corruption is detected
- Snapshots: Instant, cheap, no performance penalty during creation
- Compression: lz4 compression is basically free (often faster than uncompressed!)
- Send/Receive: Efficient replication to another system
- Self-healing: With redundancy, bad blocks are automatically repaired
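Self-healing only happens when ZFS actually reads a block, either during normal I/O or during a scrub, and it can only repair what redundancy or extra copies allow. Forcing a full verification pass is one command (rpool is the default pool name from the installer):
```
# Re-read and verify every block in the pool; repairable errors are fixed automatically
zpool scrub rpool

# Check scrub progress and any READ/WRITE/CKSUM errors it found
zpool status rpool
```
Debian's ZFS utilities normally ship a monthly scrub cron job, so manual scrubs are mostly for when you suspect trouble.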
ZFS Disadvantages
- RAM hungry: Wants 1GB+ per TB of storage for optimal ARC
- CPU for compression: Minimal with lz4, noticeable with zstd
- Complexity: More knobs to understand
- No shrink: Can’t reduce pool size
ZFS Pool Status
```
# Pool health
zpool status
#   pool: rpool
#  state: ONLINE
# config:
#   NAME         STATE   READ WRITE CKSUM
#   rpool        ONLINE     0     0     0
#     nvme0n1p3  ONLINE     0     0     0

# Space usage
zfs list
# NAME         USED  AVAIL  REFER  MOUNTPOINT
# rpool        120G   280G    96K  /rpool
# rpool/ROOT    50G   280G    96K  /rpool/ROOT
# rpool/data    70G   280G    96K  /rpool/data

# Check compression ratio
zfs get compressratio rpool/data
```
Tuning ZFS for Proxmox
Limit ARC to leave RAM for VMs:
```
# Check current ARC size
arc_summary | grep "ARC size"

# Limit to 4GB (adjust based on your RAM)
echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf
update-initramfs -u
reboot
```
Rule of thumb: Give ARC 1GB per TB of storage, minimum 1GB, maximum 50% of RAM.
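After the reboot, it's worth confirming the cap actually applied; a quick check using the ZFS module parameter and the arcstat tool that ships with the ZFS utilities:
```
# Current ARC cap in bytes; should match the value from zfs.conf
cat /sys/module/zfs/parameters/zfs_arc_max

# Watch actual ARC size and hit rate, sampled every 5 seconds
arcstat 5
```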
Adding ZFS Storage
```
# Create new pool on separate disk
zpool create -o ashift=12 tank /dev/sdb

# Enable compression
zfs set compression=lz4 tank

# Create dataset for VMs
zfs create tank/vms

# Add to Proxmox
pvesm add zfspool tank-vms --pool tank/vms --content images,rootdir
```
My Actual Setup
Single Node Homelab
```
NVMe 500GB (rpool)
├── rpool/ROOT      # Proxmox OS (50GB)
└── rpool/data      # VM disks (ZFS, compression, snapshots)

SATA SSD 1TB (tank)
└── tank/backups    # Backups (directory storage)
```
Why:
- ZFS for VM disks — I want snapshots and checksums
- Separate disk for backups — if rpool dies, backups survive
- Compression saves 20-40% space on typical workloads
Production Cluster
```
NVMe 256GB (rpool)
└── rpool/ROOT       # OS only, small and fast

2x SATA SSD 1TB (mirror, vmpool)
└── vmpool/data      # VM disks with redundancy

2x HDD 4TB (mirror, backup)
└── backup/proxmox   # Proxmox Backup Server storage
```
Why:
- Separate OS from VMs — OS disk failure doesn’t lose VMs
- Mirrors for redundancy — single disk failure = no downtime (creation sketch after this list)
- HDDs for backups — capacity over speed, write once read rarely
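For reference, a sketch of how that mirrored VM pool could be created, assuming two dedicated SSDs at /dev/sdb and /dev/sdc (on real hardware, prefer /dev/disk/by-id paths) and the names from the layout above:
```
# Mirrored pool for VM disks: one failed disk keeps the pool online
zpool create -o ashift=12 vmpool mirror /dev/sdb /dev/sdc
zfs set compression=lz4 vmpool
zfs create vmpool/data

# Register with Proxmox
pvesm add zfspool vmpool-data --pool vmpool/data --content images,rootdir
```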
Snapshots vs Backups
This is where people get confused.
Snapshots Are Not Backups
A snapshot is a point-in-time view stored on the same disk:
```
# Create ZFS snapshot
zfs snapshot rpool/data/vm-100-disk-0@before-upgrade

# List snapshots
zfs list -t snapshot

# Rollback
zfs rollback rpool/data/vm-100-disk-0@before-upgrade
```
Snapshots are:
- Instant: No performance penalty to create
- Same disk: If disk dies, snapshots die too
- For rollback: Made a bad change? Roll back in seconds
Snapshots are NOT:
- Off-site: They’re on the same physical disk
- Disaster recovery: Disk failure loses everything
- Long-term retention: Too many snapshots = space + performance issues
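The ZFS commands above snapshot a single disk image. Proxmox can also snapshot a whole VM, all disks plus the config and optionally RAM state, through qm on any snapshot-capable storage. VMID 100 and the snapshot name are just examples:
```
# Snapshot the entire VM before a risky change
qm snapshot 100 before-upgrade --description "state before the upgrade"

# List snapshots, and roll back if the change goes badly
qm listsnapshot 100
qm rollback 100 before-upgrade
```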
Backups Are Copies Elsewhere
```
# Proxmox backup (stores on backup storage)
vzdump 100 --storage backup-storage --mode snapshot

# ZFS send to another system
zfs send rpool/data/vm-100-disk-0@backup | ssh backup-server zfs recv tank/backups/vm-100
```
Backups are:
- Off-system: Different disk, different machine, different building
- Disaster recovery: Original dies, restore from backup
- Long-term: Keep 30 days, 12 weeks, whatever you need
Use both. Snapshots for quick rollbacks (before upgrades, config changes). Backups for disaster recovery.
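Testing restores belongs in the same habit. A minimal sketch with qmrestore, where the archive path, the spare VMID 9100, and the target storage are examples to adjust:
```
# Restore a backup into an unused VMID so the original VM is untouched
# (archive path and storage name are examples; point them at your own backup)
qmrestore /mnt/storage/proxmox/dump/vzdump-qemu-100.vma.zst 9100 --storage local-zfs
```
Boot it, check the data, then remove it with qm destroy 9100.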
Storage Performance
Testing Your Storage
Before putting workloads on storage, benchmark it:
```
# Install fio
apt install fio

# Random 4K writes (database-like)
fio --name=rand-write --ioengine=libaio --iodepth=32 --rw=randwrite --bs=4k --direct=1 \
    --size=1G --numjobs=4 --runtime=60 --group_reporting --filename=/rpool/data/test.fio

# Sequential writes (backup-like)
fio --name=seq-write --ioengine=libaio --iodepth=1 --rw=write --bs=1m --direct=1 \
    --size=4G --numjobs=1 --runtime=60 --group_reporting --filename=/rpool/data/test.fio

# Cleanup
rm /rpool/data/test.fio
```
Typical numbers:
- NVMe: 500K+ IOPS random, 3GB/s+ sequential
- SATA SSD: 50K IOPS random, 500MB/s sequential
- HDD: 150 IOPS random, 150MB/s sequential
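Reads are worth measuring too, since the write tests above only tell half the story. The same layout flipped to random reads; fio lays the test file out again if it was already cleaned up:
```
# Random 4K reads, same geometry as the random-write test above
fio --name=rand-read --ioengine=libaio --iodepth=32 --rw=randread --bs=4k --direct=1 \
    --size=1G --numjobs=4 --runtime=60 --group_reporting --filename=/rpool/data/test.fio
rm /rpool/data/test.fio
```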
Fast Often Means Fragile
High-performance storage often sacrifices safety:
- NVMe without power loss protection: Data corruption on power loss
- Write caching without battery backup: Same problem
- Consumer SSDs: Not designed for write-heavy workloads
For VMs that matter, use enterprise SSDs with power loss protection or ZFS with a proper setup (mirrors, proper RAM, UPS).
Storage Migration
Need to move VMs between storage backends?
Online Migration (VM Running)
```
# Move disk to different storage
qm move_disk 100 scsi0 target-storage

# Or via Web UI: VM → Hardware → Disk → Move Storage
```
Offline Migration
```
# Stop VM
qm stop 100

# Back up, then restore onto the target storage (--force overwrites the existing VM 100)
vzdump 100 --storage backup-storage --mode stop
qmrestore /mnt/storage/proxmox/dump/vzdump-qemu-100-<timestamp>.vma.zst 100 --storage target-storage --force
```
ZFS Send/Receive
For ZFS-to-ZFS, this is most efficient:
```
# Snapshot the disk, then send it to the remote pool
zfs snapshot rpool/data/vm-100-disk-0@migrate
zfs send rpool/data/vm-100-disk-0@migrate | ssh target-host zfs recv tank/data/vm-100-disk-0
```
The Lesson
Snapshots are not backups. And fast often means fragile.
Storage is where data lives. Get it wrong and you lose everything. The temptation is to optimize for speed — NVMe everything, no redundancy, maximum performance.
Then a disk fails. Or worse, corrupts silently. And you discover that your snapshots were on the same disk that died.
My approach:
- ZFS for data that matters — checksums catch corruption
- Mirrors for production — single disk failure = no panic
- Separate backup storage — not on the same disk, not on the same host
- Test restores — a backup you haven’t restored is a backup you hope works
The boring, redundant setup survives. The fast, minimal setup survives until it doesn’t.