
How to Set Up Ceph on Proxmox and Perform Live VM Migration

In this video, we will cover how to perform live VM migration between Proxmox nodes in a cluster using Ceph as the distributed data storage.

Setting up a Ceph Storage Pool

Steps to create a Ceph cluster:

  1. Have a dedicated storage device (SSD or HDD) on each node for your Ceph pool. It CANNOT be a disk you already use, because it will be wiped.

Ensure you have an extra SSD or HDD installed for your Ceph pool.

Ensure dedicated storage device for Ceph pool
⚠️

Important

The following steps must be performed on EVERY node in your cluster.

Click your node, go to Ceph, and click "Install Ceph."

Installing Ceph on node

Select the Reef release (version 18.2 at the time of recording), or whatever the latest version is when you install. Choose the No-Subscription repository, since we are not using a paid Proxmox subscription.

Ceph Reef version install

On the next screen, press Enter to confirm and wait for the install to complete.

Ceph installation progress

When the blue Next button is enabled, click Next.

Ceph install next button

Your install should automatically select the network matching that of your node. Confirm and click Next.

Ceph network settings
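The installation and initialization steps above can also be sketched from the command line with `pveceph`. This is a hedged example: the network CIDR below is a placeholder and must match your own cluster network, and each command is run on every node.

```shell
# Install Ceph Reef from the no-subscription repository (run on each node)
pveceph install --repository no-subscription --version reef

# Initialize Ceph, pointing it at your cluster network
# (192.168.1.0/24 is an example CIDR -- substitute your own)
pveceph init --network 192.168.1.0/24

# Create a monitor and a manager on this node
pveceph mon create
pveceph mgr create
```

The GUI wizard performs the equivalent of these steps when you click through the Install Ceph dialog.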

Create the Storage Pool

Click your node name, go to Disks, select your dedicated storage device, and click "Wipe Disk."

Wipe dedicated storage device

Return to the Ceph menu dropdown near the node name, select OSD, select the wiped device, and click Create.

Create OSD from wiped disk
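The wipe-and-create-OSD steps can be done from a shell as well. A minimal sketch, assuming the dedicated device is `/dev/sdb` (an example path; confirm your actual device first, since this destroys all data on it):

```shell
# WARNING: irreversibly destroys all data on the device.
# /dev/sdb is an example -- verify with `lsblk` before running.
ceph-volume lvm zap /dev/sdb --destroy

# Create an OSD on the freshly wiped device
pveceph osd create /dev/sdb
```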

Click Pools below the OSD menu, then click Create in the top left.

Set the pool size according to your number of nodes/OSD drives. For example, with two nodes and two drives, set pool size to two. Leave PG autoscaler mode on.

Create pool with size setting
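The same pool can be created on the CLI. A sketch for the two-node example above, where `cephpool` is an assumed pool name (`size` is the replica count; `min_size` is how many replicas must be available to serve I/O):

```shell
# Create a replicated pool sized for the two-node/two-OSD example
# and register it as Proxmox VM storage (--add_storages matches the
# GUI's "Add as Storage" behavior)
pveceph pool create cephpool --size 2 --min_size 2 \
    --pg_autoscale_mode on --add_storages 1
```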

How to Perform Live Migrations of VMs

To live migrate a VM, its storage must reside on the Ceph pool.

When creating a VM, on the Disks page select your Ceph pool as the storage option.

Selecting Ceph pool for VM disk
ℹ️

Note

You can edit existing VMs to move their storage to the Ceph pool to enable live migration.
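Moving an existing VM's disk onto the Ceph pool can also be done with `qm`. A hedged example, assuming VM ID 100, disk `scsi0`, and a pool storage named `cephpool` (all placeholders):

```shell
# Move the disk of VM 100 onto the Ceph-backed storage;
# --delete 1 removes the old copy on local storage after the move
qm disk move 100 scsi0 cephpool --delete 1
```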

Removing Local Disk for Live Migration

Live migration will NOT work if a CD/DVD device is attached. This device is only needed for initial OS installation.

Steps:

  1. Power off your VM.
  2. Select the CD/DVD device under Hardware.
  3. Click Remove.
  4. Power on the VM again.
Removing CD/DVD device from VM
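The steps above can be sketched on the CLI as well, assuming VM ID 100 and that the CD/DVD drive sits at `ide2` (the common default; check the Hardware tab for yours):

```shell
# With the VM powered off, detach the CD/DVD device
qm set 100 --delete ide2

# Power the VM back on
qm start 100
```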

Example of live migration failing with a local CD/DVD attached:

Live migration failure example

Example with CD/DVD removed:

Live migration possible after removing CD/DVD

Now, with the CD/DVD removed and VM running, select the target node to migrate to and click Migrate.

Migrate VM to target node
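The GUI migration corresponds to a single `qm` command. An example, assuming VM ID 100 and a target node named `pve2` (both placeholders):

```shell
# Live-migrate the running VM 100 to node pve2
qm migrate 100 pve2 --online
```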

Live feed of the migration process:

Live migration progress feed

After migration, the VM runs on the new node with nearly zero downtime.

VM successfully migrated

You can now move all VMs away from a node to perform maintenance without downtime or service disruption.

Maintenance without downtime

Conclusion

This video covered:

  1. How to wipe drives.
  2. How to create a Ceph Pool.
  3. How to live migrate VMs.
  4. How to perform node maintenance and updates without disrupting user availability.

Follow Us on Social Media

YouTube
Discord
Patreon
Reddit
Rumble