I’ve run ESXi, Hyper-V, oVirt, and plain KVM with libvirt. They all work. But when Broadcom acquired VMware and started the licensing chaos, I moved everything to Proxmox. Not because it’s trendy — because it’s boring in the best way.
Proxmox is Debian with a web UI and good defaults. When things break (and they will), you’re debugging Linux, not a proprietary hypervisor. The skills transfer. The logs make sense. The community has seen your problem before.
This is how to install Proxmox in a way that doesn’t create pain later.
Why Proxmox
It’s just Debian. Under the web UI, it’s apt, systemd, and standard Linux networking. When the UI doesn’t do what you need, drop to the shell.
ZFS first-class. Built-in ZFS support with proper integration. Snapshots, replication, compression — all accessible from the UI.
No licensing games. The “enterprise” repository requires a subscription, but the no-subscription repository works fine. You’re not crippled without paying.
Clustering is free. Three nodes, shared storage, HA — no extra licenses.
Both VMs and containers. KVM for full VMs, LXC for lightweight containers. Same management interface.
Before You Install
Hardware Considerations
CPU: Intel or AMD with virtualization extensions (VT-x/AMD-V). Check BIOS — these are sometimes disabled by default.
RAM: Minimum 8GB for the host, but realistically 32GB+ for anything useful. ECC recommended for ZFS, not required.
Storage:
- Boot drive: SSD, 32GB minimum (128GB comfortable)
- VM storage: Separate drive(s), SSD strongly preferred
- ZFS: Wants multiple drives for redundancy
Network: Dedicated NIC for management, additional for VM traffic if you’re serious.
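Before buying or repurposing hardware, you can verify the virtualization extensions and RAM from any Linux live environment. A quick check (output messages are my own wording):

```shell
# "vmx" (Intel VT-x) or "svm" (AMD-V) must appear in the CPU flags;
# if neither does, enable virtualization in BIOS/UEFI and check again
if grep -Eq 'vmx|svm' /proc/cpuinfo; then
    echo "virtualization extensions present"
else
    echo "not found - check BIOS settings"
fi

# Installed RAM in GB, to sanity-check against the 32GB recommendation
free -g | awk '/^Mem:/ {print $2 " GB"}'
```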
The Decision: ZFS vs LVM vs ext4
This is the first fork in the road. Choose wrong and you’ll reinstall later.
ZFS (my choice for most cases):
- Built-in checksums, catches silent corruption
- Snapshots are instant and cheap
- Compression saves space with minimal CPU overhead
- Replication to another node is trivial
- Requires more RAM (1GB per TB of storage, roughly)
- Single-disk ZFS works fine, just no redundancy
LVM-thin:
- Less RAM overhead
- Snapshots work but are less elegant
- No checksums
- Familiar if you know LVM
- Good choice for simple setups or low-RAM systems
ext4/XFS on raw disk:
- Simplest
- No snapshots without external tools
- Fine for the boot drive if VMs live elsewhere
My recommendation: ZFS unless you have less than 16GB RAM or specific reasons not to. The data integrity alone is worth it.
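A taste of what you get: once installed, snapshots, rollback, and compression stats are one-liners. This assumes the default pool names the installer creates:

```shell
# Instant, initially zero-cost snapshot of the VM dataset
zfs snapshot rpool/data@pre-change

# List snapshots and see what lz4 compression is saving you
zfs list -t snapshot
zfs get compressratio rpool

# Revert the dataset (discards all changes made since the snapshot)
zfs rollback rpool/data@pre-change
```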
Installation
Download the ISO from proxmox.com. Write it to USB with dd, Rufus, or Etcher.
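If you go the dd route, double-check the target device first. The ISO filename and /dev/sdX below are placeholders for your actual download and USB device:

```shell
# Identify the USB stick - writing to the wrong disk destroys its data
lsblk

# conv=fsync ensures the write is flushed before dd exits
dd if=proxmox-ve.iso of=/dev/sdX bs=1M status=progress conv=fsync
```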
Boot and Initial Screens
- Boot from USB
- Select “Install Proxmox VE”
- Accept EULA
Target Disk Selection
This is where most people make mistakes.
Single disk:
```
Target Harddisk: /dev/sda
Filesystem: zfs (RAID0)   # Yes, RAID0 for single disk
```

RAID0 on one disk sounds wrong, but it’s just “use this one disk with ZFS.”
Multiple disks for redundancy:
```
Filesystem: zfs (RAID1)    # Mirror, needs 2+ disks
# or
Filesystem: zfs (RAIDZ-1)  # Needs 3+ disks
```

Advanced options (click “Options” button):
```
ashift: 12      # Correct for most SSDs (4K sectors)
compress: lz4   # Basically free compression
checksum: on    # Never turn this off
copies: 1       # 2 for paranoid, uses 2x space
hdsize: <leave blank or set limit>
```

If you’re using NVMe, ashift=12 is still correct for most drives.
Network Configuration
```
Management Interface: eno1 (or whatever your NIC is)
Hostname (FQDN): pve1.lab.local
IP Address: 192.168.1.10
Netmask: 255.255.255.0
Gateway: 192.168.1.1
DNS Server: 192.168.1.1
```

Use a static IP. DHCP for a hypervisor is asking for trouble.
FQDN matters. Clustering uses hostnames. Get it right now or fix it painfully later.
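After the first boot, it's worth confirming the name actually resolves to the management IP and not to a loopback entry. The hostname and IP here are the example values from above:

```shell
# Should print the FQDN you set in the installer, e.g. pve1.lab.local
hostname --fqdn

# Should resolve to the management IP (192.168.1.10), not 127.0.1.1
getent hosts pve1.lab.local

# The installer writes the mapping here; fix it in this file if wrong
cat /etc/hosts
```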
Timezone and Password
Set your timezone. Set a strong root password. You’ll create non-root users later.
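When you get to that later step, non-root users can be created from the shell with pveum. A sketch, where the user name is an example and the syntax follows recent PVE versions (check `pveum help` on yours):

```shell
# Create a user in the built-in "pve" auth realm and set a password
pveum user add admin@pve
pveum passwd admin@pve

# Grant full admin rights at the root of the permission tree
pveum acl modify / --users admin@pve --roles Administrator
```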
Installation Completes
Remove USB, reboot. Access web UI at https://<ip>:8006.
Post-Install: Repository Configuration
The default install points to the enterprise repository, which requires a subscription. You’ll see errors during updates. Fix this:
```shell
# Disable enterprise repository
mv /etc/apt/sources.list.d/pve-enterprise.list /etc/apt/sources.list.d/pve-enterprise.list.disabled

# Add no-subscription repository
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list

# Update
apt update && apt full-upgrade -y
```

For Ceph (if you’re using it):

```shell
# Same pattern
mv /etc/apt/sources.list.d/ceph.list /etc/apt/sources.list.d/ceph.list.disabled
echo "deb http://download.proxmox.com/debian/ceph-quincy bookworm no-subscription" > /etc/apt/sources.list.d/ceph-no-subscription.list
```

Verify Installation
Check ZFS

```shell
zpool status
# Should show your rpool, healthy

zfs list
# Should show rpool and rpool/data
```

Check VM Storage

```shell
pvesm status
# Should show local, local-lvm (or local-zfs)
```

Check Networking

```shell
ip a
# Should show vmbr0 bridge with your management IP
```

Check Services

```shell
systemctl status pvedaemon
systemctl status pveproxy
```

Initial Configuration via Web UI
Navigate to https://<ip>:8006 and log in as root.
Datacenter → Storage
You’ll see:
- local: ISO images, container templates, backups (directory storage)
- local-lvm or local-zfs: VM disks (block storage)
This is fine to start. We’ll discuss storage architecture later.
Node → System → DNS
Verify DNS is correct. Add a search domain if needed.
Node → System → Time
Verify timezone. NTP is configured by default (systemd-timesyncd).
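A quick way to confirm both from the shell:

```shell
# Shows the configured time zone and whether the clock is NTP-synced
timedatectl
# Look for "System clock synchronized: yes" and the expected time zone
```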
Subscription Nag
You’ll see a subscription popup on login. This is expected without a subscription. It’s just a nag, not a limitation.
To remove it (optional, slightly hacky):
```shell
# This modifies the JS file - breaks on updates, needs reapply
sed -Ezi.bak "s/(Ext\.Msg\.show\(\{[^}]+title: gettext\('No valid sub)/void\(\{ \/\/ \1/g" /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js
systemctl restart pveproxy
```

I don’t bother. It’s one click to dismiss.
The Disk Layout I Actually Use
For a single-node homelab:
```
/dev/nvme0n1 (500GB NVMe)
└── ZFS: rpool
    ├── rpool/ROOT/pve-1   # Proxmox OS
    └── rpool/data         # VM disks

/dev/sda (2TB SATA SSD) - optional
└── ZFS: tank
    └── tank/backups       # Backup storage
```

For a production cluster:

```
/dev/nvme0n1 (256GB NVMe)
└── ZFS: rpool             # OS only, small and fast

/dev/sda, /dev/sdb (2x 1TB SSD)
└── ZFS mirror: vmpool
    └── vmpool/data        # VM disks with redundancy

/dev/sdc, /dev/sdd (2x 4TB HDD)
└── ZFS mirror: backup
    └── backup/pbs         # Proxmox Backup Server storage
```

Key principle: Separate OS from VM storage. If your VM pool fills up, your host still boots.
Updates: The Boring Part That Matters
```shell
# Regular updates
apt update && apt full-upgrade

# Check for kernel updates
pveversion -v

# Reboot if kernel updated
reboot
```

Before major version upgrades (7.x → 8.x):
- Read the official upgrade guide completely
- Backup everything
- Test on non-production first
- Run the upgrade checklist script
```shell
pve7to8 --full   # For 7→8 upgrade, shows potential issues
```

What I Wish I Knew
ZFS RAM usage. ZFS wants RAM for ARC (adaptive replacement cache). Default is up to 50% of RAM. For a VM host, you might want to limit it:
```shell
# Limit ARC to 4GB
echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf
update-initramfs -u
```

Enterprise vs no-subscription repo. They’re nearly identical. Enterprise gets updates slightly earlier, that’s all. No-subscription is fine for production.
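A note on that zfs_arc_max value: it's specified in bytes, so compute it rather than typing magic numbers. Shell arithmetic does the job:

```shell
# zfs_arc_max is in bytes; derive it from a target size in GB
arc_gb=4
echo $((arc_gb * 1024 * 1024 * 1024))   # 4294967296
```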
Clustering from the start. If you might cluster later, plan for it now. Same network segment, unique hostnames, Corosync-compatible setup.
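For reference, cluster creation is just two commands once the groundwork is right. The cluster name below is an example:

```shell
# On the first node - creates the cluster
pvecm create homelab

# On each additional node, joining via the first node's IP
pvecm add 192.168.1.10

# Verify quorum and membership
pvecm status
```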
Backups are separate. Proxmox creates VMs. Proxmox Backup Server (PBS) backs them up. They’re different products that work together. Plan your backup storage accordingly.
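Even without PBS, vzdump gives you ad-hoc backups from day one. A sketch, assuming a storage named "backup" and a VM with ID 100:

```shell
# Snapshot-mode backup of VM 100, zstd-compressed, to the "backup" storage
vzdump 100 --storage backup --mode snapshot --compress zstd
```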
The Lesson
The most important thing isn’t ‘install’ — it’s laying the foundation for upgrades.
A Proxmox install takes 10 minutes. The choices you make during those 10 minutes affect the next 5 years. Wrong disk layout? Reinstall. Wrong hostname? Pain when clustering. Wrong storage? Juggling VMs later.
The boring install is the one that survives:
- ZFS for data integrity
- Static IP, proper FQDN
- Repository configured correctly
- Separate OS and VM storage
- Documentation of what you did
Proxmox will upgrade through multiple major versions if you don’t make weird choices at install time. That’s the goal: a hypervisor you forget is there because it just works.