<code>
deb http://repos.uclv.edu.cu/ubuntu trusty-security multiverse
EoT


# connected to the internet
cat >> /etc/apt/sources.list << 'EoT'
deb http://download.ceph.com/debian/ jessie main
EoT
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys E84AC2C0460F3994
</code>
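A quick check, not part of the original steps, that apt now sees the Ceph repository before anything is installed; the versions shown will depend on the mirror:

<code>
apt-get update
# the candidate version of the ceph packages should now come from download.ceph.com
apt-cache policy ceph
</code>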
  
On the admin node:
  
<code>
apt-get update
apt-get install ceph-deploy

# this must be done as the ceph user

mkdir ceph-deploy
cd ceph-deploy
</code>
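ceph-deploy needs passwordless SSH from the admin node to every other node (the log below explicitly checks for it). If it is not already in place, a minimal sketch of setting it up, assuming the ceph user already exists on every node with passwordless sudo:

<code>
# run as the ceph user on ceph-admin
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for host in mon1 mon2 mon3 ceph1 ceph2 ceph3; do
    ssh-copy-id ceph@$host
done
</code>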

<code>
# ceph-deploy new mon1 mon2 mon3
[ceph_deploy.cli][INFO  ] Invoked (1.4.0): /usr/bin/ceph-deploy new mon1 mon2 mon3
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][DEBUG ] Resolving host mon1
[ceph_deploy.new][DEBUG ] Monitor mon1 at 10.12.1.154
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[mon1][DEBUG ] connected to host: ceph-admin
[mon1][INFO  ] Running command: ssh -CT -o BatchMode=yes mon1
[ceph_deploy.new][DEBUG ] Resolving host mon2
[ceph_deploy.new][DEBUG ] Monitor mon2 at 10.12.1.155
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[mon2][DEBUG ] connected to host: ceph-admin
[mon2][INFO  ] Running command: ssh -CT -o BatchMode=yes mon2
[ceph_deploy.new][DEBUG ] Resolving host mon3
[ceph_deploy.new][DEBUG ] Monitor mon3 at 10.12.1.156
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[mon3][DEBUG ] connected to host: ceph-admin
[mon3][INFO  ] Running command: ssh -CT -o BatchMode=yes mon3
[ceph_deploy.new][DEBUG ] Monitor initial members are ['mon1', 'mon2', 'mon3']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['10.12.1.154', '10.12.1.155', '10.12.1.156']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
</code>

Edit ceph.conf:
<code>
[global]
fsid = 531b4820-2257-4f3b-b12b-e1f6827ecce5
mon_initial_members = mon1, mon2, mon3
mon_host = 10.12.1.154,10.12.1.155,10.12.1.156
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true

public network = 10.12.1.0/24
cluster network = 10.12.253.0/24

# Choose reasonable numbers of replicas and placement groups.
osd pool default size = 2       # Write an object 2 times
osd pool default min size = 1   # Allow writing 1 copy in a degraded state
osd pool default pg num = 64
osd pool default pgp num = 64

# Choose a reasonable crush leaf type:
# 0 for a 1-node cluster
# 1 for a multi-node cluster in a single rack
# 2 for a multi-node, multi-chassis cluster with multiple hosts in a chassis
# 3 for a multi-node cluster with hosts across racks, etc.
osd crush chooseleaf type = 1
</code>
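The pool defaults above only apply to pools created after this configuration is in place. Existing pools can be inspected and, if needed, raised once the cluster is up (a sketch; in this Ceph version pg_num can only be increased, never decreased):

<code>
ceph osd pool get rbd pg_num
ceph osd pool set rbd pg_num 128
ceph osd pool set rbd pgp_num 128
</code>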

Install Ceph on each server:
<code>
# as the ceph user

ceph-deploy install ceph-admin mon1 mon2 mon3
ceph-deploy install ceph1 ceph2 ceph3
</code>
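A quick way, not in the original, to confirm the install landed everywhere, run from the admin node as the ceph user:

<code>
for host in ceph-admin mon1 mon2 mon3 ceph1 ceph2 ceph3; do
    echo -n "$host: "; ssh $host ceph --version
done
</code>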

If everything went OK, you can now create the initial configuration for the monitors:
<code>
ceph-deploy mon create-initial
</code>

The output should keep growing the configuration little by little as it works through all the monitors. You should see something like this:

<code>
[mon3][INFO  ] Running command: sudo initctl emit ceph-mon cluster=ceph id=mon3
[mon3][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.mon3.asok mon_status
[mon3][DEBUG ] ********************************************************************************
[mon3][DEBUG ] status for monitor: mon.mon3
[mon3][DEBUG ] {
[mon3][DEBUG ]   "election_epoch": 1,
[mon3][DEBUG ]   "extra_probe_peers": [
[mon3][DEBUG ]     "10.12.1.154:6789/0",
[mon3][DEBUG ]     "10.12.1.155:6789/0"
[mon3][DEBUG ]   ],
[mon3][DEBUG ]   "monmap": {
[mon3][DEBUG ]     "created": "0.000000",
[mon3][DEBUG ]     "epoch": 0,
[mon3][DEBUG ]     "fsid": "531b4820-2257-4f3b-b12b-e1f6827ecce5",
[mon3][DEBUG ]     "modified": "0.000000",
[mon3][DEBUG ]     "mons": [
[mon3][DEBUG ]       {
[mon3][DEBUG ]         "addr": "10.12.1.154:6789/0",
[mon3][DEBUG ]         "name": "mon1",
[mon3][DEBUG ]         "rank": 0
[mon3][DEBUG ]       },
[mon3][DEBUG ]       {
[mon3][DEBUG ]         "addr": "10.12.1.155:6789/0",
[mon3][DEBUG ]         "name": "mon2",
[mon3][DEBUG ]         "rank": 1
[mon3][DEBUG ]       },
[mon3][DEBUG ]       {
[mon3][DEBUG ]         "addr": "10.12.1.156:6789/0",
[mon3][DEBUG ]         "name": "mon3",
[mon3][DEBUG ]         "rank": 2
[mon3][DEBUG ]       }
[mon3][DEBUG ]     ]
[mon3][DEBUG ]   },
[mon3][DEBUG ]   "name": "mon3",
[mon3][DEBUG ]   "outside_quorum": [],
[mon3][DEBUG ]   "quorum": [],
[mon3][DEBUG ]   "rank": 2,
[mon3][DEBUG ]   "state": "electing",
[mon3][DEBUG ]   "sync_provider": []
[mon3][DEBUG ] }
[mon3][DEBUG ] ********************************************************************************
[mon3][INFO  ] monitor: mon.mon3 is running
[mon3][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.mon3.asok mon_status
[ceph_deploy.mon][INFO  ] processing monitor mon.mon1
[mon1][DEBUG ] connected to host: mon1
[mon1][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.mon1.asok mon_status
[ceph_deploy.mon][INFO  ] mon.mon1 monitor has reached quorum!
[ceph_deploy.mon][INFO  ] processing monitor mon.mon2
[mon2][DEBUG ] connected to host: mon2
[mon2][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.mon2.asok mon_status
[ceph_deploy.mon][INFO  ] mon.mon2 monitor has reached quorum!
[ceph_deploy.mon][INFO  ] processing monitor mon.mon3
[mon3][DEBUG ] connected to host: mon3
[mon3][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.mon3.asok mon_status
[ceph_deploy.mon][INFO  ] mon.mon3 monitor has reached quorum!
[ceph_deploy.mon][INFO  ] all initial monitors are running and have formed quorum
[ceph_deploy.mon][INFO  ] Running gatherkeys...
[ceph_deploy.gatherkeys][DEBUG ] Checking mon1 for /etc/ceph/ceph.client.admin.keyring
[mon1][DEBUG ] connected to host: mon1
[mon1][DEBUG ] detect platform information from remote host
[mon1][DEBUG ] detect machine type
[mon1][DEBUG ] fetch remote file
[ceph_deploy.gatherkeys][DEBUG ] Got ceph.client.admin.keyring key from mon1.
[ceph_deploy.gatherkeys][DEBUG ] Have ceph.mon.keyring
[ceph_deploy.gatherkeys][DEBUG ] Checking mon1 for /var/lib/ceph/bootstrap-osd/ceph.keyring
[mon1][DEBUG ] connected to host: mon1
[mon1][DEBUG ] detect platform information from remote host
[mon1][DEBUG ] detect machine type
[mon1][DEBUG ] fetch remote file
[ceph_deploy.gatherkeys][DEBUG ] Got ceph.bootstrap-osd.keyring key from mon1.
[ceph_deploy.gatherkeys][DEBUG ] Checking mon1 for /var/lib/ceph/bootstrap-mds/ceph.keyring
[mon1][DEBUG ] connected to host: mon1
[mon1][DEBUG ] detect platform information from remote host
[mon1][DEBUG ] detect machine type
[mon1][DEBUG ] fetch remote file
[ceph_deploy.gatherkeys][DEBUG ] Got ceph.bootstrap-mds.keyring key from mon1.
</code>
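After gatherkeys, the keyrings named in the log should be sitting in the working directory on the admin node; a quick check:

<code>
# expected: ceph.client.admin.keyring, ceph.mon.keyring,
#           ceph.bootstrap-osd.keyring, ceph.bootstrap-mds.keyring
ls *.keyring
</code>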

OSD creation process

To list the available disks, use:
<code>
ceph-deploy disk list ceph1
</code>

To create an OSD on a specific disk or partition; a zap can be run first to wipe the device if desired:
<code>
ceph-deploy disk zap ceph1:sdc  ......

ceph-deploy osd create ceph1:sdc
# to use a separate journal device
ceph-deploy osd create ceph1:sdc:/dev/sdb1
</code>
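For the three-OSD layout shown in the status output further down (one OSD each on ceph1, ceph2 and ceph3), the create step is simply repeated per node. A sketch that assumes the data disk is sdc on every node:

<code>
for node in ceph1 ceph2 ceph3; do
    ceph-deploy disk zap $node:sdc
    ceph-deploy osd create $node:sdc
done
</code>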

Finish by deploying the configuration to all the nodes:
<code>
ceph-deploy admin ceph-admin mon1 mon2 mon3 ceph1 ceph2 ceph3
</code>
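If the ceph CLI will be used by a non-root user on those nodes, the pushed admin keyring usually needs to be made readable; an extra step taken from the standard ceph-deploy quick start, not in the original:

<code>
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
</code>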

It is important that all the clocks are synchronized. Using ntp is recommended, as described in the article linked at the end of this entry.
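A minimal sketch of putting ntp in place on every node (package and service names assumed for Ubuntu trusty, as used in this guide):

<code>
apt-get install ntp
service ntp restart
ntpq -p    # peers should show a non-zero reach value
</code>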

If everything went well, you should see something like this:
<code>
ceph@ceph-admin:~/cluster$ ceph health
HEALTH_OK
ceph@ceph-admin:~/cluster$ ceph -w
    cluster 531b4820-2257-4f3b-b12b-e1f6827ecce5
     health HEALTH_OK
     monmap e1: 3 mons at {mon1=10.12.1.154:6789/0,mon2=10.12.1.155:6789/0,mon3=10.12.1.156:6789/0}, election epoch 8, quorum 0,1,2 mon1,mon2,mon3
     osdmap e13: 3 osds: 3 up, 3 in
      pgmap v23: 192 pgs, 3 pools, 0 bytes data, 0 objects
            104 MB used, 82795 MB / 82900 MB avail
                 192 active+clean
2016-06-02 12:57:14.610325 mon.0 [INF] osdmap e13: 3 osds: 3 up, 3 in

ceph@ceph-admin:~/cluster$ ceph osd tree
# id    weight  type name       up/down reweight
-1      0.09    root default
-2      0.03            host ceph1
0       0.03                    osd.0   up      1
-3      0.03            host ceph2
1       0.03                    osd.1   up      1
-4      0.03            host ceph3
2       0.03                    osd.2   up      1
</code>

To use a block device:

<code>
ceph@ceph-admin:~/cluster$ rbd create mirepo --size 20480

ceph@ceph-admin:~/cluster$ rbd ls
mirepo

ceph@ceph-admin:~/cluster$ rbd --image mirepo info
rbd image 'mirepo':
        size 20480 MB in 5120 objects
        order 22 (4096 kB objects)
        block_name_prefix: rb.0.102d.2ae8944a
        format: 1
</code>

Then, on the machine where you want to map the block device you just created:
<code>
root@net-test:~# modprobe rbd
root@net-test:~# echo "10.12.1.154,10.12.1.155,10.12.1.156 name=admin,secret=AQDBYlBX8CiSHRAAVGYor8pmE2oGIQk7YO3Tig== rbd mirepo" > /sys/bus/rbd/add

root@net-test:~# ll /dev/rbd*
brw-rw---- 1 root disk 254, 0 Jun  2 13:18 /dev/rbd0

/dev/rbd:
total 0
drwxr-xr-x  3 root root   60 Jun  2 13:18 .
drwxr-xr-x 18 root root 3020 Jun  2 13:18 ..
drwxr-xr-x  2 root root   60 Jun  2 13:18 rbd
root@net-test:~#
</code>
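Once /dev/rbd0 shows up it behaves like any other block device. A sketch of formatting and mounting it (filesystem and mount point are arbitrary choices, not from the original):

<code>
mkfs.ext4 /dev/rbd0
mkdir -p /mnt/mirepo
mount /dev/rbd0 /mnt/mirepo
df -h /mnt/mirepo
</code>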

The secret used in the map command above can be found on a monitor node, in the file:
<code>
root@mon1:~# cat /etc/ceph/ceph.client.admin.keyring
[client.admin]
        key = AQDBYlBX8CiSHRAAVGYor8pmE2oGIQk7YO3Tig==
root@mon1:~#
</code>
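Writing to /sys/bus/rbd/add works on a bare client, but if the rbd CLI is installed on that machine and /etc/ceph/ceph.conf plus the admin keyring have been copied to it, the same mapping can be done with a single command (a sketch):

<code>
modprobe rbd
rbd map mirepo --name client.admin
# the mapped device shows up as /dev/rbd0, just like with the sysfs method
</code>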

Taken from:

http://www.virtualtothecore.com/en/adventures-with-ceph-storage-part-5-install-ceph-in-the-lab/
  
[[addrmosd|Brief guide to removing/adding an OSD]]
  
  
[[disk4pool|Assigning only one group of disks to a pool]]
  