CephNotes

Some notes about Ceph
Laurent Barbe @SIB

Get the number of placement groups per OSD

Get the PG distribution per OSD on the command line:

pool:   0   1   2   3   | SUM
------------------------------
osd.10  6   6   6   84  | 102
osd.11  7   6   6   76  | 95
osd.12  4   4   3   56  | 67
osd.20  5   5   5   107 | 122
osd.13  3   3   3   73  | 82 …
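One way to build a table like this is a small awk pass over 'ceph pg dump'. A minimal sketch, not the exact script from the post; the column layout of 'ceph pg dump' varies between Ceph versions, so treating the first bracketed field as the 'up' set is an assumption to check against your output:

ceph pg dump 2>/dev/null | awk '
  # PG lines start with the pgid, "<pool>.<seq>"
  $1 ~ /^[0-9]+\.[0-9a-f]+$/ {
      pool = substr($1, 1, index($1, ".") - 1)
      for (i = 2; i <= NF; i++)
          if ($i ~ /^\[[0-9,]+\]$/) {        # first bracketed field = up set
              gsub(/[\[\]]/, "", $i)
              n = split($i, osd, ",")
              for (j = 1; j <= n; j++)
                  count["osd." osd[j], pool]++
              break
          }
  }
  END {
      for (key in count) {
          split(key, k, SUBSEP)
          printf "%-8s pool %-3s : %d\n", k[1], k[2], count[key]
      }
  }' | sort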

CRUSHMAP: Example of a hierarchical cluster map

It is not always easy to know how to organize your data in the Crushmap, especially when you are trying to distribute the data geographically while also separating different types of disks, e.g. SATA, SAS, and SSD. Let's see what we can come up with as a Crushmap hierarchy.

Take a simple example of a distribution …
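As a hedged sketch of one possible layout (the bucket names ssd, sata, dc1 are illustrative, not from the post): one CRUSH root per disk type, a datacenter level under each root, and a rule that pins a pool to a disk type while still spreading replicas across datacenters:

ceph osd crush add-bucket ssd  root
ceph osd crush add-bucket sata root
ceph osd crush add-bucket dc1-ssd  datacenter
ceph osd crush add-bucket dc1-sata datacenter
ceph osd crush move dc1-ssd  root=ssd
ceph osd crush move dc1-sata root=sata
# Rule: take the ssd root, pick replicas across datacenters
ceph osd crush rule create-simple ssd-by-dc ssd datacenter
# A pool then selects the rule with "ceph osd pool set <pool> crush_rule ssd-by-dc"
# (crush_ruleset <id> on older releases)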

Multiple clusters on the same hardware: OSD isolation with LXC

Ceph makes it easy to create multiple clusters on the same hardware through cluster naming. If you want better isolation, you can use LXC, for example to run a different Ceph version in each cluster.


For this you will need access to the physical disks from …
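For illustration, a hedged fragment of an LXC (1.x-style) container config that grants such access; the device numbers and the /dev/sdb path are assumptions to adapt to your host:

# Allow the container to use /dev/sd* block devices (major number 8)
lxc.cgroup.devices.allow = b 8:* rwm
# Bind the physical disk for this cluster's OSD into the container
lxc.mount.entry = /dev/sdb dev/sdb none bind,optional,create=file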

Replace Apache with Civetweb on the RadosGW

Since Firefly, you can try the lightweight embedded web server Civetweb in place of Apache. Enabling it is very simple; there is nothing new to install, just add this line to your ceph.conf:

[client.radosgw.gateway]
rgw frontends = "civetweb port=80"
...

If you have already installed Apache, remember …
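After the change, restart the gateway and check that Civetweb answers on the configured port. A quick sketch, assuming a sysvinit radosgw service (the service name varies by distribution and Ceph version):

service radosgw restart
curl -i http://localhost:80/    # an anonymous request should return an S3-style XML response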

Difference between 'ceph osd reweight' and 'ceph osd crush reweight'

From Gregory and Craig on the ceph-users mailing list…

Gregory Farnum (http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-June/040961.html):

"ceph osd crush reweight" sets the CRUSH weight of the OSD. This weight is an arbitrary value (generally the size of the disk in TB or something) and controls how …
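To make the distinction concrete, the two commands side by side (the OSD id and weights are only example values):

ceph osd crush reweight osd.7 3.64   # CRUSH weight: long-term, usually the disk size in TB
ceph osd reweight 7 0.8              # override weight in [0,1]: temporarily move part of the data off osd.7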