Setup local ceph cluster
1. Create a new project with all settings left at their defaults. Choose a descriptive name like `ceph-cluster-demo`.
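
This walkthrough uses the LXD UI, but most steps have a CLI equivalent; rough sketches are included after the relevant steps. For the project:

```bash
# Create the demo project and make it the active project for later commands
lxc project create ceph-cluster-demo
lxc project switch ceph-cluster-demo
```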
2. Inside your new project, create a storage pool to be used by the ceph cluster.
    1. Set the name of the storage pool to `ceph-cluster-pool`
    2. Select `ZFS` for the driver option
    3. Set the storage pool size to 100GiB
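
A CLI sketch for the same pool, assuming a loop-backed ZFS pool is fine for this demo:

```bash
# Create a 100GiB ZFS storage pool to back the ceph node VMs and their volumes
lxc storage create ceph-cluster-pool zfs size=100GiB
```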
3. Create 3 custom storage volumes inside the storage pool you just created.
    1. Set the name of the volume to `remote1`
    2. Set the volume size to 20GiB
    3. Set the content type to block
    4. Accept the defaults for all other settings
    5. Create the volume, then repeat steps 1 to 3 to create two more custom storage volumes named `remote2` and `remote3`
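
With the CLI, the three block volumes can be created as sketched below; `--type=block` selects the block content type:

```bash
# Create three 20GiB block volumes that will serve as the ceph OSD disks
for vol in remote1 remote2 remote3; do
  lxc storage volume create ceph-cluster-pool "$vol" --type=block size=20GiB
done
```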
4. Create a managed network for communication between the ceph cluster nodes.
    1. Set the network type to `Bridge`
    2. Set the network name to `ceph-network`
    3. Accept the defaults for all other settings
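
The CLI equivalent is a single command; `bridge` is the default network type, so no extra options are needed:

```bash
# Create the bridge network shared by the ceph node VMs
lxc network create ceph-network
```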
5. Create 3 LXD VM instances to host the ceph cluster. For each instance:
    1. Set the instance name to `ceph-node-[instance-number]`, i.e. `ceph-node-1`, `ceph-node-2`, `ceph-node-3`
    2. Select the `Ubuntu 22.04 LTS jammy` base image for the instance (non-minimal)
    3. Set the instance type to `VM`
    4. Inside the Disk devices advanced settings tab:
        1. Choose `ceph-cluster-pool` to be the root storage for the instance. Leave the Size input empty.
        2. Attach a disk device. Select a custom storage volume from `ceph-cluster-pool` that you just created, e.g. the `remote1` volume for the `ceph-node-1` instance.
    5. Inside the Network devices advanced settings tab, create a network device.
        1. Set the Network to `ceph-network`
        2. Set the device name to `eth0`
    6. Inside the Resource limits advanced settings tab:
        1. Set the Exposed CPU limit to `2`
        2. Set the Memory limit to `2GB`
    7. Create the instance without starting it, then repeat the steps above for the remaining two instances (see the CLI sketch below for an alternative).
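
If you prefer the CLI, a rough equivalent for one instance is sketched below (shown for `ceph-node-1` with the `remote1` volume; repeat with the other names and volumes):

```bash
# Create the VM without starting it: jammy base image, ceph-network as its NIC,
# root disk on ceph-cluster-pool, 2 CPUs and 2GB of memory
lxc init ubuntu:22.04 ceph-node-1 --vm \
  --network ceph-network \
  --storage ceph-cluster-pool \
  -c limits.cpu=2 \
  -c limits.memory=2GB

# Attach the custom block volume that will later become this node's OSD disk
lxc storage volume attach ceph-cluster-pool remote1 ceph-node-1
```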
6. Start all 3 instances created in step 5.
7. Inside each VM instance, install `microceph`, which will be used to deploy the ceph cluster later. For each instance:
    1. Start a terminal session for that instance
    2. Inside the terminal, enter `snap install microceph` and wait for the installation to complete
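
Steps 6 and 7 can also be run from the host with the CLI; a sketch, assuming the VMs get enough time to boot before the install runs:

```bash
# Start the three VMs
lxc start ceph-node-1 ceph-node-2 ceph-node-3

# Install microceph in each of them (the lxd-agent inside the VM must be up,
# so wait a short while after starting before running this)
for node in ceph-node-1 ceph-node-2 ceph-node-3; do
  lxc exec "$node" -- snap install microceph
done
```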
8. Set up the ceph cluster with the following steps; take care to enter commands in the correct instance terminal sessions:
    1. Inside the terminal session for the `ceph-node-1` instance, enter `microceph init`.
        1. Accept the default for the listening address on `ceph-node-1`
        2. Enter `yes` to create a new ceph cluster
        3. Accept the default for the system name, i.e. `ceph-node-1`
        4. Enter `yes` to add additional servers to the ceph cluster
        5. Enter `ceph-node-2` for the name of the additional server. This will return a token; take note of it, as you will use it later when setting up the ceph cluster on the `ceph-node-2` instance.
        6. Enter `yes` again to add another server.
        7. Enter `ceph-node-3` for the name of the additional server. This will return a token; take note of it, as you will use it later when setting up the ceph cluster on the `ceph-node-3` instance.
        8. Press enter without any value input to carry on with the setup process.
        9. Accept the default to add a local disk. This will result in the following terminal output:
            ```
            Available unpartitioned disks on this system:
            +---------------+----------+------+--------------------------------------------------------+
            |     MODEL     | CAPACITY | TYPE |                          PATH                          |
            +---------------+----------+------+--------------------------------------------------------+
            | QEMU HARDDISK | 20.00GiB | scsi | /dev/disk/by-id/scsi-SQEMU_QEMU_HARDDISK_lxd_volume--1 |
            +---------------+----------+------+--------------------------------------------------------+
            ```
        10. Enter the detected PATH of the disk from the output of the above step and confirm.
        11. Accept the default to not wipe the local disk, since we just created the storage volume in step 3.
        12. Accept the default to not encrypt the local disk.
        13. Press enter without any value input to complete the ceph setup on `ceph-node-1`
    2. Inside the terminal session for the `ceph-node-2` instance, enter `microceph init`.
        1. Accept the default for the listening address on `ceph-node-2`
        2. Accept the default to not create a new cluster
        3. Get the token generated in step 8.1.5 and paste it into the terminal to add `ceph-node-2` to the ceph cluster
        4. Accept the default to add a local disk. This will result in the following terminal output:
            ```
            Available unpartitioned disks on this system:
            +---------------+----------+------+--------------------------------------------------------+
            |     MODEL     | CAPACITY | TYPE |                          PATH                          |
            +---------------+----------+------+--------------------------------------------------------+
            | QEMU HARDDISK | 20.00GiB | scsi | /dev/disk/by-id/scsi-SQEMU_QEMU_HARDDISK_lxd_volume--1 |
            +---------------+----------+------+--------------------------------------------------------+
            ```
        5. Enter the detected PATH of the disk from the output of the above step and confirm.
        6. Accept the default to not wipe the local disk, since we just created the storage volume in step 3.
        7. Accept the default to not encrypt the local disk.
        8. Press enter without any value input to complete the ceph setup on `ceph-node-2`
    3. Inside the terminal session for the `ceph-node-3` instance, enter `microceph init`.
        1. Accept the default for the listening address on `ceph-node-3`
        2. Accept the default to not create a new cluster
        3. Get the token generated in step 8.1.7 and paste it into the terminal to add `ceph-node-3` to the ceph cluster
        4. Accept the default to add a local disk. This will result in the following terminal output:
            ```
            Available unpartitioned disks on this system:
            +---------------+----------+------+--------------------------------------------------------+
            |     MODEL     | CAPACITY | TYPE |                          PATH                          |
            +---------------+----------+------+--------------------------------------------------------+
            | QEMU HARDDISK | 20.00GiB | scsi | /dev/disk/by-id/scsi-SQEMU_QEMU_HARDDISK_lxd_volume--1 |
            +---------------+----------+------+--------------------------------------------------------+
            ```
        5. Enter the detected PATH of the disk from the output of the above step and confirm.
        6. Accept the default to not wipe the local disk, since we just created the storage volume in step 3.
        7. Accept the default to not encrypt the local disk.
        8. Press enter without any value input to complete the ceph setup on `ceph-node-3`
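
Note that newer MicroCeph releases may replace the interactive `microceph init` flow with explicit cluster commands; if that is the case for your version, the same cluster can be formed roughly as sketched below (angle-bracket values are placeholders you must fill in from your own output):

```bash
# On ceph-node-1: bootstrap the cluster and generate join tokens for the other nodes
microceph cluster bootstrap
microceph cluster add ceph-node-2   # prints the join token for ceph-node-2
microceph cluster add ceph-node-3   # prints the join token for ceph-node-3

# On ceph-node-2 and ceph-node-3: join the cluster using the matching token
microceph cluster join <token-printed-on-ceph-node-1>

# On every node: add the attached 20GiB block volume as an OSD, using the disk
# path reported by `microceph disk list` on that node
microceph disk add /dev/disk/by-id/<disk-id-for-this-node>
```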
9. Confirm that the ceph cluster is now up and running. In the terminal session for `ceph-node-1`, enter `microceph.ceph status`. This should display output similar to that shown below:
    ```
    root@ceph-node-1:~# microceph.ceph status
      cluster:
        id:     ea999901-40d2-4cf8-9e8c-b3ae2db8cba4
        health: HEALTH_OK

      services:
        mon: 3 daemons, quorum ceph-node-1,ceph-node-2,ceph-node-3 (age 99s)
        mgr: ceph-node-1(active, since 49m), standbys: ceph-node-2, ceph-node-3
        osd: 3 osds: 3 up (since 31s), 3 in (since 33s)

      data:
        pools:   1 pools, 1 pgs
        objects: 2 objects, 577 KiB
        usage:   66 MiB used, 60 GiB / 60 GiB avail
        pgs:     1 active+clean
    ```
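
On any of the nodes you can also check the deployment at the MicroCeph level; a quick sanity check, assuming the snap is installed as in step 7:

```bash
# Lists the cluster members and the services and disks running on each of them
microceph status
```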
10. Now that the ceph cluster is up and running, we just need to integrate it with the host LXD server that is running on your machine.
    1. In the terminal session for `ceph-node-1`, run `cat /var/snap/microceph/current/conf/ceph.conf` to show the content of the `ceph.conf` file. The output should look similar to that shown below:
        ```
        root@ceph-node-1:~# cat /var/snap/microceph/current/conf/ceph.conf
        # # Generated by MicroCeph, DO NOT EDIT.
        [global]
        run dir = /var/snap/microceph/793/run
        fsid = ea999901-40d2-4cf8-9e8c-b3ae2db8cba4
        mon host = 10.73.14.211,10.73.14.54,10.73.14.30
        auth allow insecure global id reclaim = false
        public addr = 10.73.14.211
        ms bind ipv4 = true
        ms bind ipv6 = false
        [client]
        ```
    2. On your host machine, create the `/etc/ceph` directory and inside of it create the `ceph.conf` file. Copy the file content from the terminal output generated in the previous step and paste it into this file.
    3. In the terminal session for `ceph-node-1`, run `cat /var/snap/microceph/current/conf/ceph.client.admin.keyring` to show the content of the `ceph.client.admin.keyring` file. The output should look similar to that shown below:
        ```
        root@ceph-node-1:~# cat /var/snap/microceph/current/conf/ceph.client.admin.keyring
        [client.admin]
            key = AQD7EKVlHhzUFRAAncX4LpBlu8iICPIiXqTQ/g==
            caps mds = "allow *"
            caps mgr = "allow *"
            caps mon = "allow *"
            caps osd = "allow *"
        ```
    4. On your host machine, create the `ceph.client.admin.keyring` file inside `/etc/ceph`. Copy the file content from the terminal output generated in the previous step and paste it into this file.
    5. Give lxd permission to the `/etc/ceph` directory by running the following command: `sudo chgrp lxd -R /etc/ceph`
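
Alternatively, both files can be pulled out of the VM from the host with `lxc file pull`; a sketch, assuming the project set up in step 1:

```bash
# Pull the MicroCeph client config and admin keyring out of ceph-node-1
lxc file pull --project ceph-cluster-demo \
  ceph-node-1/var/snap/microceph/current/conf/ceph.conf /tmp/ceph.conf
lxc file pull --project ceph-cluster-demo \
  ceph-node-1/var/snap/microceph/current/conf/ceph.client.admin.keyring /tmp/ceph.client.admin.keyring

# Install them under /etc/ceph and give the lxd group access, as in the step above
sudo mkdir -p /etc/ceph
sudo cp /tmp/ceph.conf /tmp/ceph.client.admin.keyring /etc/ceph/
sudo chgrp lxd -R /etc/ceph
```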
11. Lastly, confirm that you can create a storage pool in the ceph cluster with the CLI command `lxc storage create [pool-name] ceph`, or alternatively use the UI to create a storage pool with the `ceph` driver type.
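
For example, a minimal CLI check (the pool name `demo-ceph-pool` is arbitrary):

```bash
# Create an LXD storage pool backed by the ceph cluster and confirm it appears
lxc storage create demo-ceph-pool ceph
lxc storage list
```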