Ceph Monitoring – CLI and REST API

We can use three things for Ceph monitoring:

  1. The CLI
  2. The REST API
  3. Third-party GUIs

Cluster health status:

root@cephadm:~# ceph health

HEALTH_WARN clock skew detected on mon.cephmon2; Monitor clock skew detected

For example, the command above is telling us that something is not right. If you run the same command as "ceph health detail", you get more detail:

root@cephadm:~# ceph health detail

HEALTH_WARN clock skew detected on mon.cephmon2; Monitor clock skew detected

mon.cephmon2 addr 10.111.21.181:6789/0 clock skew 0.142183s > max 0.05s (latency 0.000532976s)

For the API:

First start the ceph-rest-api application. ceph-rest-api is built on Flask, and when you run it the built-in WSGI server starts up to receive and handle your requests. The -n option sets the client name; you can get the value to use from the name of the relevant keyring file after an ls -al /etc/ceph/.

root@cephadm:~# ceph-rest-api -n client.admin

* Running on http://0.0.0.0:5000/

212.58.13.17 - - [12/Jan/2016 17:41:07] "GET /api/v0.1/health HTTP/1.1" 200 -
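Once the API is up, you can hit it from any HTTP client. Below is a minimal sketch in Python with the requests library, assuming ceph-rest-api is listening on localhost with the default port 5000 shown above; /api/v0.1/health is the endpoint from the log line.

import requests

# Query the health endpoint of ceph-rest-api (assumed to listen on localhost:5000).
resp = requests.get(
    "http://localhost:5000/api/v0.1/health",
    headers={"Accept": "application/json"},  # ask for JSON; drop this for the plain-text form
)
resp.raise_for_status()
print(resp.text)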

To watch all events flowing by, "tail -f" style:

root@cephadm:~# ceph -w

The command above shows all Info/Warning/Error messages in real time. You can narrow it down with various flags: --watch-debug, --watch-info, or --watch-sec/--watch-warn/--watch-error.
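If you want to consume that stream from a script instead of a terminal, one simple approach is to wrap ceph -w in a subprocess and read it line by line; a sketch, assuming the ceph binary is on the PATH:

import subprocess

# Stream cluster events line by line, "tail -f" style; add a flag such as
# "--watch-warn" to the argument list to narrow what you see.
proc = subprocess.Popen(["ceph", "-w"], stdout=subprocess.PIPE, universal_newlines=True)
try:
    for line in proc.stdout:
        print(line.rstrip())
except KeyboardInterrupt:
    pass
finally:
    proc.terminate()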

If we want to see the usage:

root@cephadm:~# ceph df

GLOBAL:

SIZE       AVAIL      RAW USED     %RAW USED

22505M     22436M       71172k          0.31

POOLS:

NAME     ID     USED     %USED     MAX AVAIL     OBJECTS

rbd      0         0         0        16827M           0
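Like most ceph subcommands, ceph df can also emit JSON with --format json, which is handier for scripting. A small sketch; the exact field names can vary a bit between Ceph releases, so treat the keys below as something to verify against your own output.

import json
import subprocess

# Run "ceph df" with JSON output and print per-pool usage.
data = json.loads(subprocess.check_output(["ceph", "df", "--format", "json"]).decode())

for pool in data.get("pools", []):
    # "stats" normally carries fields such as bytes_used / max_avail / objects;
    # printing the whole dict keeps the sketch independent of exact key names.
    print(pool.get("name"), pool.get("stats"))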

Now let's also look at the cluster status:

root@cephadm:~# ceph status

cluster 554bf5c1-d2b3-4dca-89db-b4b654e0bc35

health HEALTH_WARN

clock skew detected on mon.cephmon2

Monitor clock skew detected

monmap e2: 2 mons at {cephmon1=10.111.21.180:6789/0,cephmon2=10.111.21.181:6789/0}

election epoch 4, quorum 0,1 cephmon1,cephmon2

osdmap e28: 4 osds: 3 up, 2 in

flags sortbitwise

pgmap v8929: 64 pgs, 1 pools, 0 bytes data, 0 objects

71172 kB used, 22436 MB / 22505 MB avail

64 active+clean

Ceph uses key-based authentication (cephx), and to my mind it also has a kind of ACL structure through the caps:

root@cephadm:~# ceph auth list

installed auth entries:

osd.1

key: AQBa0INWGvOMLRAAgtWEd/qiIaNAUUW2jwS6Kw==

caps: [mon] allow profile osd

caps: [osd] allow *

osd.2

key: AQDc0YNWBGrBFxAAhoSauH6fDizQ1XH0xHq78w==

caps: [mon] allow profile osd

caps: [osd] allow *

osd.3

key: AQAB94NW5BzGERAAW7jiH1+21w23aGOq/q/4Ag==

caps: [mon] allow profile osd

caps: [osd] allow *

client.admin

key: AQDFvYNWrL20BhAA4jbMbET9iaDJUtsYpUqJdg==

caps: [mds] allow *

caps: [mon] allow *

caps: [osd] allow *

client.bootstrap-mds

key: AQDFvYNW8yOtKRAALXVHmZyuODglRAqumADxig==

caps: [mon] allow profile bootstrap-mds

client.bootstrap-osd

key: AQDFvYNW6uhaEhAAp5ORmKB83b7bLNXz8HNVxw==

caps: [mon] allow profile bootstrap-osd

client.bootstrap-rgw

key: AQDFvYNWd/wBHhAAuGWzPZF6YsBALePDUtIb4g==

caps: [mon] allow profile bootstrap-rgw
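If you only need the key of a single entity, for example to hand it to a client, there is no need to scan the whole list: ceph auth get-key returns just the key. A quick sketch from Python, assuming the keyring on this host is allowed to read it:

import subprocess

# Fetch only the secret key of a single entity (here client.admin) instead of
# searching through the full "ceph auth list" output.
key = subprocess.check_output(["ceph", "auth", "get-key", "client.admin"])
print(key.decode().strip())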

You have seen the cluster state as a whole; you can also look at each component separately, for example:

Take the MON / OSD / PG components: each one can be inspected on its own, and the dump output gives you even more.

root@cephadm:~# ceph mon stat

e2: 2 mons at {cephmon1=10.111.21.180:6789/0,cephmon2=10.111.21.181:6789/0}, election epoch 4, quorum 0,1 cephmon1,cephmon2

root@cephadm:~# ceph osd stat

osdmap e28: 4 osds: 3 up, 2 in

flags sortbitwise

root@cephadm:~# ceph osd dump

epoch 28

fsid 554bf5c1-d2b3-4dca-89db-b4b654e0bc35

created 2015-12-30 13:19:32.847082

modified 2016-01-02 23:30:33.964193

flags sortbitwise

pool 0 'rbd' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool stripe_width 0

max_osd 4

osd.0 down out weight 0 up_from 0 up_thru 0 down_at 0 last_clean_interval [0,0) :/0 :/0 :/0 :/0 exists,new dd052c0f-5cf4-44ca-8b49-ba38e643f5cc

osd.1 up   in  weight 1 up_from 5 up_thru 26 down_at 0 last_clean_interval [0,0) 10.111.21.185:6800/1728 10.111.21.185:6801/1728 10.111.21.185:6802/1728 10.111.21.185:6803/1728 exists,up 587abe90-6400-4132-9f14-b1eebbc9ed29

osd.2 up   in  weight 1 up_from 9 up_thru 26 down_at 0 last_clean_interval [0,0) 10.111.21.186:6800/1694 10.111.21.186:6801/1694 10.111.21.186:6802/1694 10.111.21.186:6803/1694 exists,up 4749ea96-8306-4350-af52-47a6ffbdd6c4

osd.3 up   out weight 0 up_from 13 up_thru 20 down_at 0 last_clean_interval [0,0) 10.111.21.185:6804/3623 10.111.21.185:6805/3623 10.111.21.185:6806/3623 10.111.21.185:6807/3623 exists,up c019a786-77fb-4835-89dd-67bdf9432f2f

root@cephadm:~# ceph pg dump

Quorum, ah, this quorum.

It comes up everywhere. Ceph says quorum is decided by the MONs: a majority of the MON nodes (more than half) must be up, which to me is exactly the odd-number issue.

root@cephadm:~# ceph quorum_status

{"election_epoch":4,"quorum":[0,1],"quorum_names":["cephmon1","cephmon2"],"quorum_leader_name":"cephmon1","monmap":{"epoch":2,"fsid":"554bf5c1-d2b3-4dca-89db-b4b654e0bc35","modified":"2015-12-31 11:51:52.959280","created":"0.000000","mons":[{"rank":0,"name":"cephmon1","addr":"10.111.21.180:6789\/0"},{"rank":1,"name":"cephmon2","addr":"10.111.21.181:6789\/0"}]}}
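Since quorum_status already prints JSON, it is easy to pick out the leader and the members from a script. A minimal sketch using the keys visible in the output above:

import json
import subprocess

# Parse the quorum status and show the leader plus the members currently in quorum.
status = json.loads(subprocess.check_output(["ceph", "quorum_status"]).decode())

print("leader :", status["quorum_leader_name"])
print("members:", ", ".join(status["quorum_names"]))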

The go-to command for OSD status:

root@cephadm:~# ceph osd tree

ID WEIGHT  TYPE NAME         UP/DOWN REWEIGHT PRIMARY-AFFINITY 

-1 0.03209 root default                                        

-2 0.02139     host cephosd1                                   

 1 0.01070         osd.1          up  1.00000          1.00000 

 3 0.01070         osd.3          up        0          1.00000 

-3 0.01070     host cephosd2                                   

 2 0.01070         osd.2          up  1.00000          1.00000 

 0       0 osd.0                down        0          1.00000
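With lots of OSDs, scanning that tree by eye gets old fast. The sketch below asks for the tree as JSON and lists the OSDs that are not up; I am assuming the nodes carry "type" and "status" fields, so double-check against the output of your own version.

import json
import subprocess

# Walk the CRUSH tree as JSON and report any OSD that is not "up".
tree = json.loads(subprocess.check_output(["ceph", "osd", "tree", "--format", "json"]).decode())

for node in tree.get("nodes", []):
    if node.get("type") == "osd" and node.get("status") != "up":
        print("%s is %s" % (node.get("name"), node.get("status")))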

About CRUSH

To see the whole CRUSH map:

root@cephadm:~# ceph osd crush dump

For all the rule sets:

root@cephadm:~# ceph osd crush rule list

[

"replicated_ruleset"

]

To see the CRUSH rules in detail, you can take a "rule dump" (ceph osd crush rule dump), as sketched below.
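The rule dump is JSON as well, so it is scriptable. A small sketch that just prints each rule's name; I am assuming a "rule_name" field here, so verify it against your own dump.

import json
import subprocess

# Dump all CRUSH rules and print their names.
rules = json.loads(subprocess.check_output(["ceph", "osd", "crush", "rule", "dump"]).decode())

for rule in rules:
    print(rule.get("rule_name"))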

By the way, since in production we will have a crazy number of OSDs, we can also look an OSD up by its number if we want:

root@cephadm:~# ceph osd find 2

{

"osd": 2,

"ip": "10.111.21.186:6800\/1694",

"crush_location": {

"host": "cephosd2",

"root": "default"

}

}
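When you really do have hundreds of OSDs, you can combine ceph osd ls with ceph osd find to map every OSD to the host it lives on. A sketch using the fields shown in the output above, and assuming ceph osd ls honours --format json like the other subcommands:

import json
import subprocess

# Map every OSD id to the host recorded in its crush_location.
osd_ids = json.loads(subprocess.check_output(["ceph", "osd", "ls", "--format", "json"]).decode())

for osd_id in osd_ids:
    info = json.loads(subprocess.check_output(["ceph", "osd", "find", str(osd_id)]).decode())
    print("osd.%d -> %s" % (osd_id, info.get("crush_location", {}).get("host", "?")))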

PGs (Placement Groups): if they are fine, the Ceph cluster is fine too.

root@cephadm:~# ceph pg stat

v8978: 64 pgs: 64 active+clean; 0 bytes data, 71180 kB used, 22436 MB / 22505 MB avail

There are many different state indicators: peering/active/clean/degraded/recovering/backfilling/remapped/stale.

Details, details:

root@cephadm:~# ceph pg 0.2d query

By the way, each OSD sends a status report to the MON nodes every 0.5 seconds.

GUI-Based Applications

  • Kraken
  • Ceph-dash tool
  • Calamari

ceph-dash tool – Ubuntu installation

If Git is already installed, skip this step:

root@cephadm:~# apt-get install git

Now create a folder where you are and pull the project from GitHub:

root@cephadm:~# mkdir ceph-dash

root@cephadm:~# git clone https://github.com/Crapworks/ceph-dash.git

root@cephadm:~# apt-get install python-pip

root@cephadm:~# easy_install Jinja2

root@cephadm:~# cd ceph-dash/

root@cephadm:~/ceph-dash# ./ceph-dash.py

[Screenshot: the ceph-dash dashboard]
