Ceph Installation

A quick Ceph installation …

We have Linux servers running a fully updated Ubuntu 14.04.3 LTS (Trusty); the server list and their roles are given below.

  • admin node (10.111.21.184)
  • monitor node 1 (10.111.21.180)
  • monitor node 2 (10.111.21.181)
  • monitor node 3
  • osd node 1 (10.111.21.185)
  • osd node 2 (10.111.21.186)
  • osd node 3

Let's add the Ceph release key

root@cephadm:~# wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -

OK

Let's add the Ceph repository to the apt sources

root@cephadm:~# echo deb http://download.ceph.com/debian-infernalis/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

deb http://download.ceph.com/debian-infernalis/ trusty main

Let's update the package index and install "ceph-deploy"

root@cephadm:~# sudo apt-get update && sudo apt-get install ceph-deploy
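To check that the install worked, ceph-deploy can report its version; the exact version string will depend on what the repo served at install time:

root@cephadm:~# ceph-deploy --version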

Before starting the installation:

  • the admin node must be able to SSH into the other nodes without a password, because we will manage all configuration centrally
  • ceph-deploy must likewise log in to the other nodes without a password, as a user with sudo rights (it will be installing packages)
  • with the "ceph-deploy --username" parameter you can specify the user you created, including root
  • NTP must be installed and configured on all nodes, and all nodes must agree on the time
  • the openssh-server package must be installed on all nodes (a command covering this and the NTP item follows the list)
  • SELinux must be disabled
  • iptables must be down so it does not block the installation
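For the NTP and OpenSSH items, installing the stock Ubuntu packages on every node should be enough as a starting point (the ntp daemon starts and syncs on its own after install):

root@cephmon1:~# sudo apt-get install -y ntp openssh-server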

I will use the "root" user in my installation. This is not recommended; they say don't do it this way in production. Likewise, don't create a user named "ceph" on the system either, because attackers know about it and have started using it for brute-force attempts.

Let's Create an SSH Key

root@cephadm:~# ssh-keygen

Warning! Leave the "passphrase" empty; do not set a password.
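If you prefer to skip the prompts entirely, ssh-keygen can be run non-interactively; here -N "" supplies the empty passphrase and -f the default key path:

root@cephadm:~# ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa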

Key Transfer

First I'm doing it for monitor node 1

root@cephadm:~# ssh-copy-id root@10.111.21.180

The authenticity of host '10.111.21.180 (10.111.21.180)' can't be established.

ECDSA key fingerprint is 2e:b7:48:94:bb:69:44:ac:3f:c0:e7:f4:8e:a6:17:c0.

Are you sure you want to continue connecting (yes/no)? yes

/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed

/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys

root@10.111.21.180's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@10.111.21.180'"

and check to make sure that only the key(s) you wanted were added.

Warning! Do the same on the 10.111.21.181, 10.111.21.185 and 10.111.21.186 machines as well.
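Instead of repeating the command by hand, a small shell loop over the remaining IPs does the same job:

root@cephadm:~# for ip in 10.111.21.181 10.111.21.185 10.111.21.186; do ssh-copy-id root@$ip; done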

Update the /etc/hosts file on every node. The entries below are what I use; the nodes should be able to ping each other by hostname.

# Ceph MON Nodes

10.111.21.180   cephmon1

10.111.21.181   cephmon2

# Ceph OSD Nodes

10.111.21.185   cephosd1

10.111.21.186   cephosd2

# Ceph Admin Node

10.111.21.184   cephadm
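A quick loop like the one below verifies hostname resolution from the admin node; run the equivalent on the other nodes too:

root@cephadm:~# for h in cephmon1 cephmon2 cephosd1 cephosd2; do ping -c 1 $h; done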

Create a directory on the admin node and run "ceph-deploy" from there

root@cephadm:~# mkdir cephclusterconfigfiles

root@cephadm:~# cd cephclusterconfigfiles/

Before we begin: if at any point you hit a problem and want to reset everything, use the "ceph-deploy purgedata nodex nodey" and "ceph-deploy forgetkeys" commands, as sketched below.
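With the node names used in this setup, a full reset would look roughly like this:

root@cephadm:~/cephclusterconfigfiles# ceph-deploy purgedata cephmon1 cephmon2 cephosd1 cephosd2

root@cephadm:~/cephclusterconfigfiles# ceph-deploy forgetkeys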

First we need to create the initial Ceph configuration and keyring files

root@cephadm:~/cephclusterconfigfiles# ceph-deploy --username root new cephmon1

or (if you don't want to specify a user)

root@cephadm:~/cephclusterconfigfiles# ceph-deploy new cephmon1

We are going to bring the cluster up with two OSDs, so add the line below under the [global] section of ceph.conf

osd pool default size = 2

Note: the default value is 3
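For reference, after the edit the top of ceph.conf should look roughly like this; the fsid below is a placeholder, since ceph-deploy new generates a unique one (and it may write a few extra defaults as well):

[global]
fsid = <generated-by-ceph-deploy-new>
mon_initial_members = cephmon1
mon_host = 10.111.21.180
osd pool default size = 2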

Let's start the installation

root@cephadm:~/cephclusterconfigfiles# ceph-deploy install cephmon1 cephmon2 cephosd1 cephosd2 cephadm

All the Ceph binaries will be installed on the listed nodes

Let's create the initial monitor node

root@cephadm:~/cephclusterconfigfiles# ceph-deploy mon create-initial
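Once the monitor is up, mon create-initial also gathers the cluster keyrings into the working directory, so afterwards you should find ceph.client.admin.keyring and the various ceph.bootstrap-*.keyring files next to ceph.conf:

root@cephadm:~/cephclusterconfigfiles# ls *.keyring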

I added one 16 GB disk to each OSD node; the operating system saw it as /dev/sdb

Let's prepare the OSD nodes and disks

The example below is for the second node; you must do this on all OSD nodes

root@cephadm:~/cephclusterconfigfiles# ceph-deploy osd prepare cephosd2:/dev/sdb
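The same command for the first node would be:

root@cephadm:~/cephclusterconfigfiles# ceph-deploy osd prepare cephosd1:/dev/sdb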

Bu islemi yaptiginizde goreceksinizki ceph-deploy ilgili node uzerinde guzel guzel partition yaratmis ve onu xfs olarak formatlamis noatime parametresinide vermis olarak.

You will end up with mount output like the following

/dev/sdb1 on /var/lib/ceph/osd/ceph-2 type xfs (rw,noatime,inode64)

During these steps, check that you get a "ready" message like the one below.

[ceph_deploy.osd][DEBUG ] Host cephosd1 is now ready for osd use.

Now let's activate the OSDs

root@cephadm:~/cephclusterconfigfiles# ceph-deploy osd activate cephosd2:/dev/sdb1

Careful here: when activating I passed /dev/sdb1, not /dev/sdb.
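And again for the first node:

root@cephadm:~/cephclusterconfigfiles# ceph-deploy osd activate cephosd1:/dev/sdb1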

Let's push the admin key and config from the admin node to the other nodes

root@cephadm:~/cephclusterconfigfiles# ceph-deploy admin cephadm cephmon1 cephosd1 cephosd2

[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to cephadm

[cephadm][DEBUG ] connected to host: cephadm

[cephadm][DEBUG ] detect platform information from remote host

[cephadm][DEBUG ] detect machine type

[cephadm][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to cephmon1

[cephmon1][DEBUG ] connected to host: cephmon1

[cephmon1][DEBUG ] detect platform information from remote host

[cephmon1][DEBUG ] detect machine type

[cephmon1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to cephosd1

[cephosd1][DEBUG ] connected to host: cephosd1

[cephosd1][DEBUG ] detect platform information from remote host

[cephosd1][DEBUG ] detect machine type

[cephosd1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to cephosd2

[cephosd2][DEBUG ] connected to host: cephosd2

[cephosd2][DEBUG ] detect platform information from remote host

[cephosd2][DEBUG ] detect machine type

[cephosd2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

And finally, check the cluster health

root@cephadm:~# ceph health

HEALTH_OK
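For more detail than ceph health gives, ceph -s and ceph osd tree show the monitor quorum and whether both OSDs are up and in:

root@cephadm:~# ceph -s

root@cephadm:~# ceph osd tree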

After these steps you can run "ps -fe" on the MON and OSD machines and see the running Ceph daemons

ceph      3829     1  0 13:19 ?        00:00:00 /usr/bin/ceph-mon --cluster=ceph -i cephmon1 -f --setuser ceph --setgroup ceph

ceph      1728     1  0 14:38 ?        00:00:01 /usr/bin/ceph-osd --cluster=ceph -i 1 -f --setuser ceph --setgroup ceph

ceph      1694     1  0 14:45 ?        00:00:01 /usr/bin/ceph-osd --cluster=ceph -i 2 -f --setuser ceph --setgroup ceph

You can find the configs on the nodes under "/etc/ceph"
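On each node that received the push, /etc/ceph should contain at least ceph.conf and ceph.client.admin.keyring:

root@cephmon1:~# ls /etc/ceph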

This walkthrough was really just for playing with the commands, not for running in production.
