CephNotes

Some notes about Ceph
Laurent Barbe @SIB

Use trim/discard with rbd kernel client (since kernel 3.18)

Realtime:

mount -o discard /dev/rbd0 /mnt/myrbd
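To make the discard option persistent, it can be set in /etc/fstab; a minimal sketch, assuming the image is mapped at boot (for example via the rbdmap service) and keeping the mount point above:

/dev/rbd0  /mnt/myrbd  xfs  discard,_netdev  0 0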

Using batch:

fstrim /mnt/myrbd
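The batch variant lends itself to scheduling; a minimal sketch, assuming a weekly cron entry for the same mount point (the file name is illustrative):

# echo '@weekly root /sbin/fstrim /mnt/myrbd' > /etc/cron.d/fstrim_myrbd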

Test

The empty FS:

$ rbd create rbd/myrbd --size=20480
$ mkfs.xfs /dev/rbd0
$ rbd diff rbd/myrbd | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB" }'
14.4062 MB

With a big file:

$ mount /dev/rbd0 …
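The rest of the test follows the same pattern; a minimal sketch of it (mount point, file name and size are illustrative): write a large file, delete it, trim, then measure the allocated space again with rbd diff.

$ mount /dev/rbd0 /mnt/myrbd
$ dd if=/dev/zero of=/mnt/myrbd/bigfile bs=1M count=1024
$ sync
$ rm /mnt/myrbd/bigfile
$ fstrim /mnt/myrbd
$ rbd diff rbd/myrbd | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB" }'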

placement_pools on Rados-GW

The purpose of this test is to map a RadosGW bucket to a specific Ceph pool, for example a fast pool backed by SSDs and a slower pool for archives:

   standard_bucket data  --> .rgw.buckets        (default pool)
   specific_bucket data  --> .rgw.buckets.custom

First, we create a pool .rgw.buckets.custom …
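The pool itself is created with ceph osd pool create; a minimal sketch (the placement group count of 64 is illustrative, and the region/zone placement configuration follows in the full post):

# ceph osd pool create .rgw.buckets.custom 64 64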

Ceph RBD with LXC containers

Update on Apr 14th, 2016: LXC 2.0.0 First Support for Ceph RBD: http://cephnotes.ksperis.com/blog/2016/04/14/lxc-2-dot-0-0-first-support-for-ceph-rbd

A simple way to secure the data in your containers is to use distributed storage such as Ceph for the LXC root storage.

For example:

# lxc-create -n …
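Before native support, the same result could be reached by hand: back the container directory with a mounted RBD image before creating the container. A minimal sketch, assuming a container named mycontainer, a 2 GB image mapped to /dev/rbd0, and the stock debian template (all names and sizes are illustrative):

# rbd create rbd/mycontainer --size=2048
# rbd map rbd/mycontainer
# mkfs.ext4 /dev/rbd0
# mkdir -p /var/lib/lxc/mycontainer
# mount /dev/rbd0 /var/lib/lxc/mycontainer
# lxc-create -n mycontainer -t debian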

OpenNebula 4.8 with Ceph Support on Debian Wheezy

A quick how-to for installing OpenNebula 4.8 with Ceph support on Debian Wheezy.

$ onedatastore show cephds
DATASTORE 101 INFORMATION                                                       
ID             : 101                 
NAME           : cephds              
USER           : oneadmin            
GROUP          : oneadmin            
CLUSTER        : -                   
TYPE           : IMAGE               
DS_MAD         : ceph                
TM_MAD         : ceph                
BASE PATH      : /var/lib/one//datastores/101
DISK_TYPE      : RBD        
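This datastore was defined from a template file; a minimal sketch of one (the pool name, monitor host, Ceph user and bridge host are illustrative and depend on your cluster):

$ cat > cephds.conf <<EOF
NAME        = "cephds"
DS_MAD      = ceph
TM_MAD      = ceph
DISK_TYPE   = RBD
POOL_NAME   = one
CEPH_HOST   = "mon1"
CEPH_USER   = libvirt
BRIDGE_LIST = "hypervisor1"
EOF
$ onedatastore create cephds.conf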

OpenNebula Installation

OpenNebula Frontend

Install …

Remove pool without name

For example:

# rados lspools
data
metadata
rbd
                            <---- ?????
.eu.rgw.root
.eu-west-1.domain.rgw
.eu-west-1.rgw.root
.eu-west-1.rgw.control
.eu-west-1.rgw.gc
.eu-west-1.rgw.buckets.index
.eu-west-1.rgw.buckets
.eu-west-1.log


# ceph osd dump | grep "pool 4 "
pool 4 '' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 8 …
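Since the pool has no name, it can be addressed by passing the empty string where the name is expected; a hedged sketch (double-check that the empty name really is pool 4 before deleting, this cannot be undone):

# rados rmpool "" "" --yes-i-really-really-mean-it

Afterwards, rados lspools should no longer show the blank entry.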