004. Ceph block device basics

I. Basic preparation

  • Refer to the document 002. Ceph installation and deployment to deploy a basic cluster;
  • Add the new node's hostname and IP to /etc/hosts on the deploy node for name resolution:
  [root@deploy ~]# echo "172.24.8.75 cephclient" >>/etc/hosts
  • Configure a domestic yum source:
  [root@cephclient ~]# yum -y update
  [root@cephclient ~]# rm /etc/yum.repos.d/* -rf
  [root@cephclient ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
  [root@cephclient ~]# yum -y install epel-release
  [root@cephclient ~]# mv /etc/yum.repos.d/epel.repo /etc/yum.repos.d/epel.repo.backup
  [root@cephclient ~]# mv /etc/yum.repos.d/epel-testing.repo /etc/yum.repos.d/epel-testing.repo.backup
  [root@cephclient ~]# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
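After replacing the repo files, it may be worth confirming that the new sources are active before installing anything further; a quick check (plain yum commands, nothing Ceph-specific assumed):

  [root@cephclient ~]# yum clean all
  [root@cephclient ~]# yum repolist			# base and epel should now come from mirrors.aliyun.com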
 

II. Block devices

2.1 Add a regular user

  [root@cephclient ~]# useradd -d /home/cephuser -m cephuser
  [root@cephclient ~]# echo "cephuser" | passwd --stdin cephuser	# create the cephuser user on the cephclient node
  [root@cephclient ~]# echo "cephuser ALL = (root) NOPASSWD:ALL" > /etc/sudoers.d/cephuser
  [root@cephclient ~]# chmod 0440 /etc/sudoers.d/cephuser
  [root@deploy ~]# su - manager
  [manager@deploy ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub cephuser@172.24.8.75
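Before moving on, it helps to confirm that the manager user on the deploy node can log in to the new node without a password and that cephuser has passwordless sudo; a quick check using the IP from above:

  [manager@deploy ~]$ ssh cephuser@172.24.8.75 sudo whoami	# should print "root" with no password prompt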
 

2.2 Install the Ceph client

  [root@deploy ~]# su - manager
  [manager@deploy ~]$ cd my-cluster/
  [manager@deploy my-cluster]$ vi ~/.ssh/config
  Host node1
     Hostname node1
     User cephuser
  Host node2
     Hostname node2
     User cephuser
  Host node3
     Hostname node3
     User cephuser
  Host cephclient
     Hostname cephclient					# new cephclient node entry
     User cephuser
  [manager@deploy my-cluster]$ ceph-deploy install cephclient	# install Ceph
 
Note: if the packages cannot be downloaded when installing with ceph-deploy, you can point the installation at a domestic mirror during deployment:
  ceph-deploy install cephclient --repo-url=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/ --gpg-url=https://mirrors.aliyun.com/ceph/keys/release.asc
  [manager@deploy my-cluster]$ ceph-deploy admin cephclient
Tip: to make it easier to manage cephclient from the deploy node later, and so that CLI commands can run without specifying the monitor address and key each time, push the configuration and admin key to the node. The ceph-deploy tool copies the keyring to the /etc/ceph directory; make sure the keyring file is readable (e.g. sudo chmod +r /etc/ceph/ceph.client.admin.keyring).
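A minimal sketch of that permission fix and a quick sanity check on the client, assuming ceph-deploy admin above has already pushed ceph.conf and the admin keyring to /etc/ceph:

  [root@cephclient ~]# chmod +r /etc/ceph/ceph.client.admin.keyring
  [root@cephclient ~]# ceph -s			# cluster status should be shown without extra --keyring options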

2.3 Create a pool

  [manager@deploy my-cluster]$ ssh node1 sudo ceph osd pool create mytestpool 64
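The trailing 64 is the placement-group count requested for the new pool; as a rough check, the value can be read back from any cluster node, for example:

  [manager@deploy my-cluster]$ ssh node1 sudo ceph osd pool get mytestpool pg_num
  pg_num: 64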

2.4 Initialize the pool

  [root@cephclient ~]# ceph osd lspools
  [root@cephclient ~]# rbd pool init mytestpool
 

2.5 Create a block device

  [root@cephclient ~]# rbd create mytestpool/mytestimages --size 4096 --image-feature layering
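Only the layering feature is enabled here because older kernel RBD clients cannot map images that use the newer features. If an image was created with the default feature set instead, the extra features can usually be turned off afterwards, roughly as follows (a sketch using the image name above):

  [root@cephclient ~]# rbd feature disable mytestpool/mytestimages object-map fast-diff deep-flatten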

2.6 Verification

  [root@cephclient ~]# rbd ls mytestpool
  mytestimages
  [root@cephclient ~]# rbd showmapped
  id pool       image        snap device
  0  mytestpool mytestimages -    /dev/rbd0
  [root@cephclient ~]# rbd info mytestpool/mytestimages
 

2.7 Map the image to a block device

  [root@cephclient ~]# rbd map mytestpool/mytestimages --name client.admin
  /dev/rbd0
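For reference, the mapping can later be released again; do not run this yet, since the following sections use /dev/rbd0:

  [root@cephclient ~]# rbd unmap /dev/rbd0
  [root@cephclient ~]# rbd showmapped		# the entry for mytestpool/mytestimages should be gone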
 

2.8 Format the device

  [root@cephclient ~]# mkfs.ext4 /dev/rbd/mytestpool/mytestimages
  [root@cephclient ~]# lsblk
 

2.9 Mount and test

  [root@cephclient ~]# sudo mkdir /mnt/ceph-block-device
  [root@cephclient ~]# sudo mount /dev/rbd/mytestpool/mytestimages /mnt/ceph-block-device/
  [root@cephclient ~]# cd /mnt/ceph-block-device/
  [root@cephclient ceph-block-device]# echo 'This is my test file!' >> test.txt
  [root@cephclient ceph-block-device]# ls
  lost+found  test.txt
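A quick persistence check, remounting the filesystem and confirming the test file survives (using the same paths as above):

  [root@cephclient ~]# cd / && umount /mnt/ceph-block-device
  [root@cephclient ~]# mount /dev/rbd/mytestpool/mytestimages /mnt/ceph-block-device/
  [root@cephclient ~]# cat /mnt/ceph-block-device/test.txt
  This is my test file!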
 

2.10 Automatic mapping

  [root@cephclient ~]# vim /etc/ceph/rbdmap
  # RbdDevice     Parameters
  #poolname/imagename id=client,keyring=/etc/ceph/ceph.client.keyring
  mytestpool/mytestimages id=admin,keyring=/etc/ceph/ceph.client.admin.keyring
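Each non-comment line names a pool/image pair plus the CephX identity and keyring used to map it at boot. Before relying on a reboot, it may help to confirm that the identity and keyring referenced above actually work:

  [root@cephclient ~]# ls -l /etc/ceph/ceph.client.admin.keyring	# the keyring referenced in rbdmap must exist and be readable
  [root@cephclient ~]# rbd --id admin ls mytestpool			# the admin identity should be able to see mytestimages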
 

2.11 Mount at boot

  [root@cephclient ~]# vi /etc/fstab
  #......
  /dev/rbd/mytestpool/mytestimages    /mnt/ceph-block-device  ext4    defaults,noatime,_netdev    0 0
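The fstab entry can be tested without rebooting; a sketch, assuming the image is still mapped and the filesystem is currently mounted from section 2.9:

  [root@cephclient ~]# umount /mnt/ceph-block-device
  [root@cephclient ~]# mount -a
  [root@cephclient ~]# df -hT /mnt/ceph-block-device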
 

2.12 Enable rbdmap at startup

  [root@cephclient ~]# systemctl enable rbdmap.service
  [root@cephclient ~]# df -hT				# verify the mount
 
Tip: if the device still fails to mount automatically after a reboot and the rbdmap service cannot be enabled properly, edit the unit file as follows:
  [root@cephclient ~]# vi /usr/lib/systemd/system/rbdmap.service
  [Unit]
  Description=Map RBD devices
  #......
  [Install]
  WantedBy=multi-user.target			# add this [Install] section so the unit can be enabled
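After editing the unit file, systemd has to re-read it and the service must be re-enabled so the new [Install] section takes effect; roughly:

  [root@cephclient ~]# systemctl daemon-reload
  [root@cephclient ~]# systemctl reenable rbdmap.service
  [root@cephclient ~]# reboot			# then check df -hT again once the node is back up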
