Why I Chose Proxmox (and How to Install It the Boring, Correct Way)

I’ve run ESXi, Hyper-V, oVirt, and plain KVM with libvirt. They all work. But when Broadcom acquired VMware and started the licensing chaos, I moved everything to Proxmox. Not because it’s trendy — because it’s boring in the best way.

Proxmox is Debian with a web UI and good defaults. When things break (and they will), you’re debugging Linux, not a proprietary hypervisor. The skills transfer. The logs make sense. The community has seen your problem before.

This is how to install Proxmox in a way that doesn’t create pain later.

Why Proxmox

It’s just Debian. Under the web UI, it’s apt, systemd, and standard Linux networking. When the UI doesn’t do what you need, drop to the shell.

ZFS first-class. Built-in ZFS support with proper integration. Snapshots, replication, compression — all accessible from the UI.

No licensing games. The “enterprise” repository requires a subscription, but the no-subscription repository works fine. You’re not crippled without paying.

Clustering is free. Three nodes, shared storage, HA — no extra licenses.

Both VMs and containers. KVM for full VMs, LXC for lightweight containers. Same management interface.

Before You Install

Hardware Considerations

CPU: Intel or AMD with virtualization extensions (VT-x/AMD-V). Check BIOS — these are sometimes disabled by default.

RAM: Minimum 8GB for the host, but realistically 32GB+ for anything useful. ECC recommended for ZFS, not required.

Storage:

  • Boot drive: SSD, 32GB minimum (128GB comfortable)
  • VM storage: Separate drive(s), SSD strongly preferred
  • ZFS: Wants multiple drives for redundancy

Network: Dedicated NIC for management, additional for VM traffic if you’re serious.
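Before burning the ISO, it's worth confirming the CPU actually exposes those virtualization extensions. A quick sketch you can run from any live Linux environment:

```shell
# Count CPU threads advertising VT-x (vmx) or AMD-V (svm).
# 0 means the extensions are missing or disabled in the BIOS.
grep -Ec 'vmx|svm' /proc/cpuinfo

# Total installed RAM, human-readable
free -h | awk '/^Mem:/ {print $2}'
```

If the count is 0 on hardware you know supports virtualization, go look for the BIOS toggle before blaming Proxmox.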

The Decision: ZFS vs LVM vs ext4

This is the first fork in the road. Choose wrong and you’ll reinstall later.

ZFS (my choice for most cases):

  • Built-in checksums, catches silent corruption
  • Snapshots are instant and cheap
  • Compression saves space with minimal CPU overhead
  • Replication to another node is trivial
  • Requires more RAM (1GB per TB of storage, roughly)
  • Single-disk ZFS works fine, just no redundancy

LVM-thin:

  • Less RAM overhead
  • Snapshots work, but they're less elegant than ZFS's
  • No checksums
  • Familiar if you know LVM
  • Good choice for simple setups or low-RAM systems

ext4/XFS on raw disk:

  • Simplest
  • No snapshots without external tools
  • Fine for the boot drive if VMs live elsewhere

My recommendation: ZFS unless you have less than 16GB RAM or specific reasons not to. The data integrity alone is worth it.
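Those ZFS selling points are all one-liners once the system is up. A sketch of the day-to-day commands (dataset names assume the installer's default rpool layout; these need root on a live ZFS system, so treat them as illustration):

```shell
# Instant, cheap snapshot before something risky
zfs snapshot rpool/data@pre-change

# List snapshots, roll back if the change went badly
zfs list -t snapshot
zfs rollback rpool/data@pre-change

# See what lz4 compression is actually saving you
zfs get compressratio rpool
```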

Installation

Download the ISO from proxmox.com. Write it to USB with dd, Rufus, or Etcher.

Boot and Initial Screens

  1. Boot from USB
  2. Select “Install Proxmox VE”
  3. Accept EULA

Target Disk Selection

This is where most people make mistakes.

Single disk:

Target Harddisk: /dev/sda
Filesystem: zfs (RAID0) # Yes, RAID0 for single disk

RAID0 on one disk sounds wrong, but it’s just “use this one disk with ZFS.”

Multiple disks for redundancy:

Filesystem: zfs (RAID1) # Mirror, needs 2+ disks
# or
Filesystem: zfs (RAIDZ-1) # Needs 3+ disks

Advanced options (click “Options” button):

ashift: 12 # Correct for most SSDs (4K sectors)
compress: lz4 # Basically free compression
checksum: on # Never turn this off
copies: 1 # 2 for paranoid, uses 2x space
hdsize: <leave blank or set limit>

If you’re using NVMe, ashift=12 is still correct for most drives.
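ashift is just the base-2 log of the sector size ZFS will assume, so the arithmetic is simple, and you can ask the drives what they report (lsblk output varies by system):

```shell
# ashift 12 -> 4096-byte (4K) sectors; ashift 9 -> legacy 512-byte
echo $((2 ** 12))
echo $((2 ** 9))

# What the drives themselves report (physical vs logical sector size)
lsblk -o NAME,PHY-SEC,LOG-SEC
```

Many drives lie and report 512-byte logical sectors over a 4K physical layout, which is why ashift=12 is the safe default.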

Network Configuration

Management Interface: eno1 (or whatever your NIC is)
Hostname (FQDN): pve1.lab.local
IP Address: 192.168.1.10
Netmask: 255.255.255.0
Gateway: 192.168.1.1
DNS Server: 192.168.1.1

Use a static IP. DHCP for a hypervisor is asking for trouble.

FQDN matters. Clustering uses hostnames. Get it right now or fix it painfully later.
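What "get it right" means concretely: the node's name must resolve to its static IP. With the example values above, the installer writes an /etc/hosts that should end up looking like this (a sketch, adjust to your addressing):

```
127.0.0.1       localhost
192.168.1.10    pve1.lab.local pve1
```

Corosync resolves peers through this file, so if it's wrong, clustering breaks in confusing ways.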

Timezone and Password

Set your timezone. Set a strong root password. You’ll create non-root users later.

Installation Completes

Remove USB, reboot. Access web UI at https://<ip>:8006.

Post-Install: Repository Configuration

The default install points to the enterprise repository, which requires a subscription, so you'll see errors during updates. Fix this:

# Disable enterprise repository
mv /etc/apt/sources.list.d/pve-enterprise.list /etc/apt/sources.list.d/pve-enterprise.list.disabled
# Add no-subscription repository
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list
# Update
apt update && apt full-upgrade -y

For Ceph (if you’re using it):

# Same pattern
mv /etc/apt/sources.list.d/ceph.list /etc/apt/sources.list.d/ceph.list.disabled
echo "deb http://download.proxmox.com/debian/ceph-quincy bookworm no-subscription" > /etc/apt/sources.list.d/ceph-no-subscription.list

Verify Installation

Check ZFS

zpool status
# Should show your rpool, healthy
zfs list
# Should show rpool and rpool/data

Check VM Storage

pvesm status
# Should show local, local-lvm (or local-zfs)

Check Networking

ip a
# Should show vmbr0 bridge with your management IP
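The vmbr0 bridge comes from /etc/network/interfaces, which the installer writes from your answers. With the example network settings above it looks roughly like this (NIC name and addresses are from the example, not universal):

```
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
```

VMs attach to vmbr0 like a physical switch; the host's management IP lives on the bridge, not on the NIC itself.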

Check Services

systemctl status pvedaemon
systemctl status pveproxy

Initial Configuration via Web UI

Navigate to https://<ip>:8006, login as root.

Datacenter → Storage

You’ll see:

  • local: ISO images, container templates, backups (directory storage)
  • local-lvm or local-zfs: VM disks (block storage)

This is fine to start. We’ll discuss storage architecture later.

Node → System → DNS

Verify DNS is correct. Add a search domain if needed.

Node → System → Time

Verify timezone. NTP is configured by default (chrony on current releases).

Subscription Nag

You’ll see a subscription popup on login. This is expected without a subscription. It’s just a nag, not a limitation.

To remove it (optional, slightly hacky):

# This modifies the JS file - breaks on updates, needs reapply
sed -Ezi.bak "s/(Ext\.Msg\.show\(\{[^}]+title: gettext\('No valid sub)/void\(\{ \/\/ \1/g" /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js
systemctl restart pveproxy

I don’t bother. It’s one click to dismiss.

The Disk Layout I Actually Use

For a single-node homelab:

/dev/nvme0n1 (500GB NVMe)
└── ZFS: rpool
    ├── rpool/ROOT/pve-1    # Proxmox OS
    └── rpool/data          # VM disks

/dev/sda (2TB SATA SSD) - optional
└── ZFS: tank
    └── tank/backups        # Backup storage

For a production cluster:

/dev/nvme0n1 (256GB NVMe)
└── ZFS: rpool              # OS only, small and fast

/dev/sda, /dev/sdb (2x 1TB SSD)
└── ZFS mirror: vmpool
    └── vmpool/data         # VM disks with redundancy

/dev/sdc, /dev/sdd (2x 4TB HDD)
└── ZFS mirror: backup
    └── backup/pbs          # Proxmox Backup Server storage

Key principle: Separate OS from VM storage. If your VM pool fills up, your host still boots.
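The installer only creates rpool; extra pools like tank or vmpool are made by hand and then registered as storage. A sketch with hypothetical device names (check yours with lsblk first; these commands need root and will destroy data on the named disks):

```shell
# Example A: single-disk backup pool (no redundancy)
zpool create -o ashift=12 tank /dev/sda
zfs set compression=lz4 tank
zfs create tank/backups

# Example B: mirrored VM pool on two other disks
zpool create -o ashift=12 vmpool mirror /dev/sdb /dev/sdc

# Register a pool with Proxmox so it appears in the UI
pvesm add zfspool vmpool-storage --pool vmpool --content images,rootdir
```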

Updates: The Boring Part That Matters

# Regular updates
apt update && apt full-upgrade
# Check running kernel and package versions
pveversion -v
# Reboot if kernel updated
reboot

Before major version upgrades (7.x → 8.x):

  1. Read the official upgrade guide completely
  2. Backup everything
  3. Test on non-production first
  4. Run the upgrade checklist script
pve7to8 --full # For 7→8 upgrade, shows potential issues

What I Wish I Knew

ZFS RAM usage. ZFS wants RAM for ARC (adaptive replacement cache). Default is up to 50% of RAM. For a VM host, you might want to limit it:

# Limit ARC to 4GB
echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf
update-initramfs -u
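The magic number is just 4 GiB expressed in bytes; recompute it for whatever limit fits your host:

```shell
# zfs_arc_max is in bytes: 4 GiB = 4 * 1024^3
echo $((4 * 1024 * 1024 * 1024))

# 8 GiB, if the host has RAM to spare
echo $((8 * 1024 * 1024 * 1024))
```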

Enterprise vs no-subscription repo. They’re nearly identical. Enterprise gets updates slightly earlier, that’s all. No-subscription is fine for production.

Clustering from the start. If you might cluster later, plan for it now. Same network segment, unique hostnames, Corosync-compatible setup.
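The cluster commands themselves are short; the prep work (same network segment, resolvable hostnames) is the hard part. A sketch, assuming the example addressing above and a cluster name of your choosing:

```shell
# On the first node
pvecm create homelab

# On each additional node, pointing at the first node's IP
pvecm add 192.168.1.10

# Verify quorum and membership
pvecm status
```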

Backups are separate. Proxmox creates VMs. Proxmox Backup Server (PBS) backs them up. They’re different products that work together. Plan your backup storage accordingly.

The Lesson

The most important thing isn’t the install itself — it’s the foundation it lays for everything after.

A Proxmox install takes 10 minutes. The choices you make during those 10 minutes affect the next 5 years. Wrong disk layout? Reinstall. Wrong hostname? Pain when clustering. Wrong storage? Juggling VMs later.

The boring install is the one that survives:

  • ZFS for data integrity
  • Static IP, proper FQDN
  • Repository configured correctly
  • Separate OS and VM storage
  • Documentation of what you did

Proxmox will upgrade through multiple major versions if you don’t make weird choices at install time. That’s the goal: a hypervisor you forget is there because it just works.