Of course, the simplest way is to use the command ceph osd tree.
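For example, the tree groups each OSD under its host bucket (illustrative output only; the columns and weights shown here are assumptions and will vary with your Ceph version and CRUSH map):
$ ceph osd tree
ID CLASS WEIGHT  TYPE NAME                                STATUS REWEIGHT PRI-AFF
-1       1.00000 root ssdr
-5       1.00000     host lxc-ceph-main-front-osd-03.ssdr
37   ssd 1.00000         osd.37                             down        0 1.00000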
Note that if an OSD is down, you can see its last address in ceph health detail:
$ ceph health detail
...
osd.37 is down since epoch 16952, last address 172.16.4.68:6804/628
Also, you can use:
$ ceph osd find 37
{
    "osd": 37,
    "ip": "172.16.4.68:6804\/636",
    "crush_location": {
        "datacenter": "pa2.ssdr",
        "host": "lxc-ceph-main-front-osd-03.ssdr",
        "physical-host": "store-front-03.ssdr",
        "rack": "pa2-104.ssdr",
        "root": "ssdr"
    }
}
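If you only need the host, you can pull it straight out of the JSON (a small sketch, assuming jq is installed on the admin node):
$ ceph osd find 37 -f json | jq -r '.crush_location.host'
lxc-ceph-main-front-osd-03.ssdr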
To get the partition UUID, you can use ceph osd dump (the UUID is at the end of the line):
$ ceph osd dump | grep ^osd.37
osd.37 down out weight 0 up_from 56847 up_thru 57230 down_at 57538 last_clean_interval [56640,56844) 172.16.4.72:6801/16852 172.17.2.37:6801/16852 172.17.2.37:6804/16852 172.16.4.72:6804/16852 exists d7ab9ac1-c68c-4594-b25e-48d3a7cfd182
$ ssh 172.17.2.37 blkid | grep d7ab9ac1-c68c-4594-b25e-48d3a7cfd182
/dev/sdg1: UUID="98594f17-eae5-45f8-9e90-cd25a8f89442" TYPE="xfs" PARTLABEL="ceph data" PARTUUID="d7ab9ac1-c68c-4594-b25e-48d3a7cfd182"
# Depending on how the partitions were created, the PARTUUID label is not necessarily present.
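Putting the two steps together, a rough sketch (it assumes the UUID is the last field of the matching ceph osd dump line, as in the output above; adjust the OSD id and host address for your cluster):
$ OSD_UUID=$(ceph osd dump | awk '$1 == "osd.37" {print $NF}')
$ ssh 172.17.2.37 "blkid | grep $OSD_UUID"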