Ceph Commands – 1

After the installation, the day-to-day commands become the most important part. In this article we will cover:

  • checking the Ceph version
  • checking the Ceph cluster status
  • listing the Ceph pools
  • checking cluster usage
  • listing the disks on a Ceph node
  • adding an OSD to the cluster
  • listing the OSDs in the cluster
  • adding a MON to the cluster
  • viewing monitor node state/quorum information
  • viewing the individual cluster maps with ceph mon/osd/pg/crush/mds dump
  • watching a recovery operation
  • taking an OSD out of service

 

Let's see the Ceph version

root@cephadm:~# ceph -v

ceph version 9.2.0 (bb2ecea240f3a1d525bcb35670cb07bd1f0ca299)

Let's check Ceph health

root@cephadm:~# ceph health 

HEALTH_WARN clock skew detected on mon.cephmon2; Monitor clock skew detected

Let's see Ceph health in a bit more detail

root@cephadm:~# ceph health detail

HEALTH_WARN clock skew detected on mon.cephmon2; Monitor clock skew detected

mon.cephmon2 addr 10.111.21.181:6789/0 clock skew 0.142086s > max 0.05s (latency 0.000618286s)
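The warning also points at the fix: the clock on cephmon2 has drifted more than the allowed 0.05s. A minimal way to bring it back in line, assuming the monitor nodes are Ubuntu boxes with the ntp package installed and can reach pool.ntp.org (swap in your own time source if you have one):

root@cephmon2:~# service ntp stop

root@cephmon2:~# ntpdate pool.ntp.org

root@cephmon2:~# service ntp start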

Let's look at the Ceph status

There is one monitor node; the osdmap shows three OSDs, two of which are running (up) and in the cluster (in):

root@cephadm:~# ceph status

cluster 554bf5c1-d2b3-4dca-89db-b4b654e0bc35

health HEALTH_OK

monmap e1: 1 mons at {cephmon1=10.111.21.180:6789/0}

election epoch 2, quorum 0 cephmon1

osdmap e10: 3 osds: 2 up, 2 in

flags sortbitwise

pgmap v17: 64 pgs, 1 pools, 0 bytes data, 0 objects

68280 kB used, 22439 MB / 22505 MB avail

64 active+clean

Ceph stores objects in pools, so let's list the existing pools:

root@cephadm:~# rados lspools 

rbd

Note: the initial installation creates a default pool named "rbd".
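If you want a pool of your own to play with rather than the default one, creating it is a single command. A quick sketch, where the pool name "testpool" and the placement group count of 64 are just example values:

root@cephadm:~# ceph osd pool create testpool 64 64

Running rados lspools again should now show testpool next to rbd.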

To list the objects inside a pool:

root@cephadm:~# rados -p rbd ls
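Since nothing has been written yet, this returns an empty list. To see an object actually show up, you can push a small test object in first; a quick sketch, where the object name "testobj" and the temp file are arbitrary:

root@cephadm:~# echo "hello ceph" > /tmp/testfile

root@cephadm:~# rados -p rbd put testobj /tmp/testfile

After that, rados -p rbd ls will list testobj, and rados -p rbd rm testobj cleans it up again.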

Let's look at cluster usage

root@cephadm:~# rados df

pool name                 KB      objects       clones     degraded      unfound           rd        rd KB           wr        wr KB

rbd                        0            0            0            0            0            0            0            0            0

total used          102808            0

total avail       34466348

total space       34569156
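ceph df gives a similar picture from the ceph side, with a global section plus per-pool usage, and is often easier to read:

root@cephadm:~# ceph df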

Listing the disks on an OSD node

root@cephadm:~/cephclusterconfigfiles# ceph-deploy disk list cephosd1

[cephosd1][DEBUG ] /dev/sda :

[cephosd1][DEBUG ]  /dev/sda2 other

[cephosd1][DEBUG ]  /dev/sda5 other, LVM2_member

[cephosd1][DEBUG ]  /dev/sda1 other, ext2, mounted on /boot

[cephosd1][DEBUG ] /dev/sdb :

[cephosd1][DEBUG ]  /dev/sdb2 ceph journal, for /dev/sdb1

[cephosd1][DEBUG ]  /dev/sdb1 ceph data, active, cluster ceph, osd.1, journal /dev/sdb2

[cephosd1][DEBUG ] /dev/sr0 other, unknown

In the Ceph installation article we added OSDs using ceph-deploy prepare and activate; this time we will do both in a single command.

First I added a disk and wanted the system to detect it without shutting down (the Ceph nodes are virtual machines, so I simply do an "Add Disk" in VMware):

root@cephosd1:~# ls /sys/class/scsi_host/ | while read host ; do echo "- - -" > /sys/class/scsi_host/$host/scan ; done
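Before doing anything else, it is worth confirming that the new disk really did appear; on my setup it shows up as /dev/sdc, but the device name may differ on yours:

root@cephosd1:~# lsblk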

Let's say the disk I added already had partitions on it and I wanted to wipe them out right away:

root@cephadm:~/cephclusterconfigfiles# ceph-deploy disk zap cephosd1:/dev/sdc

Now let's prepare and activate the disk as an OSD in one step; by default two partitions are created, one for the journal and one for data, and the data partition is formatted as xfs:

root@cephadm:~/cephclusterconfigfiles# ceph-deploy osd create cephosd1:/dev/sdc

When we check the cluster status we will see that the new disk is up and in:

root@cephadm:~/cephclusterconfigfiles# ceph status

cluster 554bf5c1-d2b3-4dca-89db-b4b654e0bc35

health HEALTH_OK

monmap e1: 1 mons at {cephmon1=10.111.21.180:6789/0}

election epoch 2, quorum 0 cephmon1

osdmap e15: 4 osds: 3 up, 3 in

flags sortbitwise

pgmap v25: 64 pgs, 1 pools, 0 bytes data, 0 objects

101 MB used, 33657 MB / 33758 MB avail

64 active+clean

Likewise, on "cephosd1" we will see two ceph-osd daemons running.
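A plain process listing is enough for this; the output below was presumably taken with something along these lines on the OSD node (any ps variant filtered for ceph-osd will do):

root@cephosd1:~# ps -ef | grep ceph-osd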

ceph      1728     1  0 14:38 ?        00:00:04 /usr/bin/ceph-osd --cluster=ceph -i 1 -f --setuser ceph --setgroup ceph

ceph      3623     1  0 17:23 ?        00:00:00 /usr/bin/ceph-osd --cluster=ceph -i 3 -f --setuser ceph --setgroup ceph

Let's list the OSDs

root@cephosd1:~# ceph osd ls

0

1

2

3

Let's view the OSDs as a tree, to see what lives where

root@cephosd1:~# ceph osd tree

ID WEIGHT  TYPE NAME         UP/DOWN REWEIGHT PRIMARY-AFFINITY

-1 0.03209 root default

-2 0.02139     host cephosd1

1 0.01070         osd.1          up  1.00000          1.00000 

3 0.01070         osd.3          up  1.00000          1.00000 

-3 0.01070     host cephosd2

2 0.01070         osd.2          up  1.00000          1.00000 

0       0 osd.0                down        0          1.00000

Let's add a second MON node

root@cephadm:~/cephclusterconfigfiles# ceph-deploy mon add cephmon2

And there it is!

Viewing Monitor Node Status/Quorum Information

There are actually quite a few commands for this, for example:

root@cephmon2:~# ceph mon stat

e2: 2 mons at {cephmon1=10.111.21.180:6789/0,cephmon2=10.111.21.181:6789/0}, election epoch 4, quorum 0,1 cephmon1,cephmon2

Or perhaps you want to see the quorum information. In the output below, rank 0 is cephmon1, which was the initial monitor node.

root@cephmon2:~# ceph mon_status | python -m json.tool

{
    "election_epoch": 4,
    "extra_probe_peers": [
        "10.111.21.181:6789/0"
    ],
    "monmap": {
        "created": "0.000000",
        "epoch": 2,
        "fsid": "554bf5c1-d2b3-4dca-89db-b4b654e0bc35",
        "modified": "2015-12-31 11:51:52.959280",
        "mons": [
            {
                "addr": "10.111.21.180:6789/0",
                "name": "cephmon1",
                "rank": 0
            },
            {
                "addr": "10.111.21.181:6789/0",
                "name": "cephmon2",
                "rank": 1
            }
        ]
    },
    "name": "cephmon1",
    "outside_quorum": [],
    "quorum": [
        0,
        1
    ],
    "rank": 0,
    "state": "leader",
    "sync_provider": []
}
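If all you care about is who is currently in quorum, ceph quorum_status gives much the same information and can pretty-print itself:

root@cephmon2:~# ceph quorum_status --format json-pretty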

Ceph mon/osd/pg/crush/mds dump

Below, mon and osd are shown as examples; you can do the same with pg, crush, or mds. The goal is to see the big picture.

root@cephadm:~# ceph mon dump

dumped monmap epoch 2

epoch 2

fsid 554bf5c1-d2b3-4dca-89db-b4b654e0bc35

last_changed 2015-12-31 11:51:52.959280

created 0.000000

0: 10.111.21.180:6789/0 mon.cephmon1

1: 10.111.21.181:6789/0 mon.cephmon2

root@cephadm:~# ceph osd dump

epoch 15

fsid 554bf5c1-d2b3-4dca-89db-b4b654e0bc35

created 2015-12-30 13:19:32.847082

modified 2015-12-30 17:23:49.250136

flags sortbitwise

pool 0 'rbd' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool stripe_width 0

max_osd 4

osd.0 down out weight 0 up_from 0 up_thru 0 down_at 0 last_clean_interval [0,0) :/0 :/0 :/0 :/0 exists,new dd052c0f-5cf4-44ca-8b49-ba38e643f5cc

osd.1 up   in  weight 1 up_from 5 up_thru 12 down_at 0 last_clean_interval [0,0) 10.111.21.185:6800/1728 10.111.21.185:6801/1728 10.111.21.185:6802/1728 10.111.21.185:6803/1728 exists,up 587abe90-6400-4132-9f14-b1eebbc9ed29

osd.2 up   in  weight 1 up_from 9 up_thru 14 down_at 0 last_clean_interval [0,0) 10.111.21.186:6800/1694 10.111.21.186:6801/1694 10.111.21.186:6802/1694 10.111.21.186:6803/1694 exists,up 4749ea96-8306-4350-af52-47a6ffbdd6c4

osd.3 up   in  weight 1 up_from 13 up_thru 14 down_at 0 last_clean_interval [0,0) 10.111.21.185:6804/3623 10.111.21.185:6805/3623 10.111.21.185:6806/3623 10.111.21.185:6807/3623 exists,up c019a786-77fb-4835-89dd-67bdf9432f2f
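As an example of the other maps, the placement group and CRUSH views look like this; pg dump is very long, so piping it through head keeps it manageable:

root@cephadm:~# ceph pg dump | head -20

root@cephadm:~# ceph osd crush dump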

Watching a Ceph Recovery Operation

root@cephmon2:~# ceph -w

The command above streams every cluster event as it happens, much like running tail -f on a log file.
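If the rolling event stream is too noisy and you just want a summary that refreshes itself, something like watch works fine as well:

root@cephmon2:~# watch -n 2 ceph status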

Taking an OSD out of service

root@cephmon2:~# ceph osd out osd.3
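Marking the OSD out tells CRUSH to start migrating its data elsewhere while the daemon keeps running; you can watch the recovery with ceph -w as above. If this was only an exercise, pulling the OSD straight back in is just as easy:

root@cephmon2:~# ceph osd in osd.3

If the disk really is being retired, the usual follow-up (a sketch, assuming the ceph-osd daemon for osd.3 has already been stopped on its node) is to remove it from the CRUSH map, delete its key, and drop the OSD entry:

root@cephmon2:~# ceph osd crush remove osd.3

root@cephmon2:~# ceph auth del osd.3

root@cephmon2:~# ceph osd rm osd.3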

Separately, the CRUSH tunables can be switched to the optimal profile for the running release; note that this kicks off data movement and requires clients recent enough to understand the new tunables:

ceph osd crush tunables optimal
