CephNotes

Some notes about Ceph
Laurent Barbe @Adelius / INRAE

Multiple clusters on the same hardware: OSD isolation with LXC

Ceph makes it easy to create multiple clusters on the same hardware by giving each cluster its own name. If you want better isolation, you can use LXC, for example to run a different Ceph version in each cluster.
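The cluster-naming mechanism can be sketched like this: each cluster has its own configuration file under /etc/ceph/, and the CLI selects it with --cluster. The cluster name "backup" below is purely illustrative.

```shell
# Each cluster gets its own conf file; the CLI picks the right one
# via --cluster (which defaults to "ceph", i.e. /etc/ceph/ceph.conf).
ls /etc/ceph/
# ceph.conf  backup.conf  ...

# Query each cluster independently
ceph --cluster ceph status
ceph --cluster backup status
```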

Ceph LXC

For this you will need access to the physical disks from …

Replace Apache with Civetweb on the RadosGW

Since Firefly you can try the lightweight embedded web server Civetweb in place of Apache. Activating it is very simple: there is nothing new to install, simply add this line to your ceph.conf:

[client.radosgw.gateway]
rgw frontends = "civetweb port=80"
...
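After adding this line, the gateway has to be restarted so Civetweb starts listening. A rough sketch; the exact service name is distribution-dependent and an assumption here:

```shell
# Restart the gateway (service name varies by distribution)
sudo service radosgw restart

# Check that Civetweb answers on the configured port
curl -i http://localhost:80/
```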

If you have already installed Apache, remember …

Difference between 'ceph osd reweight' and 'ceph osd crush reweight'

From Gregory and Craig on the mailing list...

Gregory Farnum (http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-June/040961.html): "ceph osd crush reweight" sets the CRUSH weight of the OSD. This weight is an arbitrary value (generally the size of the disk in TB or something) and controls how much …
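A conceptual sketch of how the two values interact, not Ceph's actual placement code: the CRUSH weight (`ceph osd crush reweight`) is the capacity weight, while the reweight (`ceph osd reweight`) is a 0.0–1.0 override that probabilistically rejects placements. The numbers below are illustrative.

```python
def effective_share(osds):
    """Approximate share of data each OSD receives.

    osds maps an OSD name to (crush_weight, reweight), where
    crush_weight is the capacity weight (often disk size in TB)
    and reweight is the 0.0-1.0 override.
    """
    eff = {name: cw * rw for name, (cw, rw) in osds.items()}
    total = sum(eff.values())
    return {name: w / total for name, w in eff.items()}

osds = {
    "osd.0": (1.82, 1.0),  # 2 TB disk, no override
    "osd.1": (1.82, 0.5),  # same disk, temporarily taking half as much
}
# osd.0 ends up with twice the share of osd.1
print(effective_share(osds))
```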

Use trim/discard with rbd kernel client (since kernel 3.18)

In real time:

mount -o discard /dev/rbd0 /mnt/myrbd

In batch:

fstrim /mnt/myrbd
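To make the real-time discard option survive a reboot, it could go into /etc/fstab; the device and mount point below are the ones from this example, and the fstab line is a sketch:

```shell
# /etc/fstab entry: mount the RBD with online discard enabled
/dev/rbd0  /mnt/myrbd  xfs  defaults,discard  0 0
```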

Test

The empty FS:

$ rbd create rbd/myrbd --size=20480
$ mkfs.xfs /dev/rbd0
$ rbd diff rbd/myrbd | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB" }'
14.4062 MB

With a big file...:

$ mount /dev/rbd0 …

placement_pools on Rados-GW

The purpose of this test is to map a RadosGW bucket to a specific Ceph pool, for example a fast SSD pool alongside a slower pool for archives...

   standard_bucket data  --> .rgw.buckets        (default pool)
   specific_bucket data  --> .rgw.buckets.custom

First, we create a pool .rgw.buckets.custom …
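The pool-creation step could look like this; the placement-group count of 64 is an illustrative assumption, not a recommendation:

```shell
# Create the dedicated pool for the custom placement target
ceph osd pool create .rgw.buckets.custom 64
```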