CephNotes

Some notes about Ceph
Laurent Barbe @Adelius / INRAE

Deep scrub distribution

To verify data integrity, Ceph uses a mechanism called deep scrubbing, which reads all of your data once per week for each placement group. This can cause overload when all OSDs run deep scrubs at the same time.

You can easily see if a deep scrub …
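As a sketch of how this distribution can be inspected (the field number used below is an assumption; the position of the deep_scrub_stamp column in `ceph pg dump` varies between Ceph releases, so check the header line of the dump first):

```shell
# Count placement groups per last deep-scrub date.
# NOTE: $20 is an assumed column; verify against the
# deep_scrub_stamp column in your "ceph pg dump" header.
ceph pg dump 2>/dev/null \
  | grep active \
  | awk '{print $20}' \
  | sort \
  | uniq -c
```

If the counts cluster on one or two dates, most PGs will be deep-scrubbed together a week later, which is exactly the overload scenario described above.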

Ceph OSD : Where is my data ?

The purpose is to find out where my data is stored on the Ceph cluster.

For this, I have just created a minimal cluster with 3 OSDs:

$ ceph-deploy osd create ceph-01:/dev/sdb ceph-02:/dev/sdb ceph-03:/dev/sdb

Where is the OSD directory on ceph-01?

$ mount | grep ceph
/dev/sdb1 …
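A quick way to follow a single object down to an OSD (the object name here is a made-up example):

```shell
# Store a test object, then ask Ceph where it landed.
rados -p rbd put testobj /etc/hosts
# Reports the placement group and the acting OSD set for the object.
ceph osd map rbd testobj
# The object is then a plain file under that OSD's data directory,
# e.g. /var/lib/ceph/osd/ceph-0/current/<pgid>_head/
```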

Samba shadow_copy and Ceph RBD

I added a script to create snapshots on RBD for use with Samba shadow_copy2. For more details, see https://github.com/ksperis/autosnap-rbd-shadow-copy

How to use :

Before starting, you need a running Ceph cluster and Samba installed.

Verify admin access to the Ceph cluster (this should not return an error):

$ rbd ls …
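For context, a share that exposes snapshots via shadow_copy2 typically looks like this in smb.conf (share name, path, and snapshot layout are examples; the exact settings the script expects are documented in its README):

```ini
[share]
    path = /mnt/myrbd
    vfs objects = shadow_copy2
    ; directory under the share where snapshots are exposed
    shadow:snapdir = .snapshots
    shadow:sort = desc
    ; timestamp format of the snapshot directory names
    shadow:format = @GMT-%Y.%m.%d-%H.%M.%S
```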

Test Ceph persistent RBD device

Create a persistent RBD device

Create a block device and map it with /etc/ceph/rbdmap:

$ rbd create rbd/myrbd --size=1024
$ echo "rbd/myrbd" >> /etc/ceph/rbdmap
$ service rbdmap reload
[ ok ] Starting RBD Mapping: rbd/myrbd.
[ ok ] Mounting all filesystems...done.

View mapped RBD devices:

$ rbd showmapped
id pool image snap device …
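To actually use the mapped device, put a filesystem on it and mount it. A minimal sketch (the device path assumes the standard /dev/rbd/<pool>/<image> udev symlink, and the mount point is an example):

```shell
# Format the mapped device (XFS here is just an example).
mkfs.xfs /dev/rbd/rbd/myrbd
mkdir -p /mnt/myrbd
# noauto: the device is not available at fstab time; mounting is
# handled after the image is mapped (see the rbdmap output above).
echo "/dev/rbd/rbd/myrbd /mnt/myrbd xfs defaults,noauto 0 0" >> /etc/fstab
mount /mnt/myrbd
```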

Don't forget to unmap before removing the RBD image:

$ rbd rm rbd/myrbd
Removing image: 99% complete...failed.2013-08-02 14:07:17.530470 7f3ba2692760 -1 librbd: error removing header: (16) Device or resource busy
rbd: error: image still has watchers
This means the image is still open or the client using it crashed. Try again after closing/unmapping it …
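The cleanup order that avoids this error, as a sketch (the mount point is the example used above; skip the umount if nothing is mounted):

```shell
# Unmount and unmap first, then removal succeeds.
umount /mnt/myrbd
rbd unmap /dev/rbd/rbd/myrbd
# Also drop the entry from /etc/ceph/rbdmap so the image
# is not mapped again at the next boot.
sed -i '/^rbd\/myrbd/d' /etc/ceph/rbdmap
rbd rm rbd/myrbd
```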