I. Deployment preparation:
Prepare 5 machines (Linux, CentOS 7.6). You can also get by with as few as 3 machines by letting the deployment node and the client share hosts with the Ceph nodes:
1 deployment node (one disk; runs ceph-deploy)
3 Ceph nodes (two disks each; the first is the system disk and runs a mon, the second is the OSD data disk)
1 client (consumes the file system, block storage, or object storage that Ceph provides)
(1) Configure static name resolution on all cluster nodes (including the client); /etc/hosts should look like:
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.253.135 controller
192.168.253.194 compute
192.168.253.15  storage
192.168.253.10  dlp
(2) Create the cent user on all cluster nodes (including the client), set its password, then run the following:
useradd cent && echo "123" | passwd --stdin cent
echo -e 'Defaults:cent !requiretty\ncent ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/ceph
chmod 440 /etc/sudoers.d/ceph
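If root SSH access between the machines is available, the same setup can be pushed to every node from one place with a small loop. This is only a sketch (the hostnames are the ones from the hosts file above); adapt as needed:
# create the cent user and its sudoers drop-in on every node
for node in dlp controller compute storage; do
    ssh root@$node "useradd cent; echo '123' | passwd --stdin cent"
    ssh root@$node "tee /etc/sudoers.d/ceph > /dev/null && chmod 440 /etc/sudoers.d/ceph" <<'EOF'
Defaults:cent !requiretty
cent ALL = (root) NOPASSWD:ALL
EOF
done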
(3) On the deployment node, switch to the cent user and set up passwordless SSH login to every node, including the client:
su - cent
ceph@dlp15:17:01~# ssh-keygen
ceph@dlp15:17:01~# ssh-copy-id dlp
ceph@dlp15:17:01~# ssh-copy-id controller
ceph@dlp15:17:01~# ssh-copy-id compute
ceph@dlp15:17:01~# ssh-copy-id storage
(4) On the deployment node, as the cent user, create the following file in cent's home directory: vi ~/.ssh/config
Host dlp
    Hostname dlp
    User cent
Host controller
    Hostname controller
    User cent
Host compute
    Hostname compute
    User cent
Host storage
    Hostname storage
    User cent
Then set its permissions:
chmod 600 ~/.ssh/config
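As a quick sanity check from the cent user on the deployment node (a sketch using the host names above), every hostname should print without a password prompt:
for node in dlp controller compute storage; do ssh $node hostname; done    # no password should be requested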
II. Configure a domestic (China mirror) Ceph repository on all nodes:
(1) On all nodes, fetch the Aliyun mirror source, and delete rdo-release-yunwei.repo or move it to another directory:
wget https://mirrors.aliyun.com/centos/7/storage/x86_64/ceph-jewel/
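If you prefer a proper yum repo over fetching packages by hand, a minimal /etc/yum.repos.d/ceph.repo pointing at the Aliyun ceph-jewel path above could look like the sketch below; this file is not part of the original walkthrough, so verify the baseurl before relying on it:
[ceph-jewel]
name=Ceph Jewel packages (Aliyun mirror)
baseurl=https://mirrors.aliyun.com/centos/7/storage/x86_64/ceph-jewel/
enabled=1
gpgcheck=0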
(2) Then run yum clean all and yum makecache to rebuild the metadata cache, and download the package bundle:
wget http://download2.yunwei.edu/shell/ceph-j.tar.gz
(3) Copy the downloaded rpms to all nodes and install them. Note that ceph-deploy-xxxxx.noarch.rpm is only needed on the deployment node; the other nodes do not need it, but the deployment node also needs the rest of the rpm packages.
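One way to push and install the packages everywhere is a loop like the following; it assumes the tarball was unpacked into a directory named ceph-j (a hypothetical name) and that the cent user with passwordless sudo already exists on each node:
for node in controller compute storage; do
    scp -r ceph-j cent@$node:/tmp/                               # copy the rpm directory to the node
    ssh cent@$node "sudo yum -y localinstall /tmp/ceph-j/*.rpm"  # ceph-deploy-*.rpm can be left out here; only the deployment node needs it
done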
(4) On the deployment node, install ceph-deploy: as root, enter the directory containing the downloaded rpm packages and run:
yum -y localinstall ./*
Create the ceph working directory:
mkdir ceph && cd ceph
(5) On the deployment node (as the cent user, in the ceph directory): set up the new cluster
ceph-deploy new controller compute storage
vim ceph.conf
Add the line: osd_pool_default_size = 2
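After the edit, ceph.conf in the working directory contains the fsid and monitor members generated by ceph-deploy new plus the added line; roughly like this (the fsid and IPs below are the ones that appear later in this post, yours will differ):
[global]
fsid = 8e03f0d7-06cb-49c6-b0fa-b9764e85e61a
mon_initial_members = controller, compute, storage
mon_host = 192.168.253.135,192.168.253.194,192.168.253.15
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd_pool_default_size = 2    # the added line: keep 2 replicas instead of the default 3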
(6) On the deployment node (in the ceph directory): install the Ceph software on all nodes
ceph-deploy install dlp controller compute storage
(7) Initialize the cluster (in the ceph directory)
ceph-deploy mon create-initial
Possible error 1:
While ceph-deploy is creating the monitors you may see "monitor is not yet in quorum".
This happens when the firewall is still running; stop the firewall on every node.
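On CentOS 7 that usually means stopping firewalld on every node; a sketch (adjust if you run iptables instead):
systemctl stop firewalld && systemctl disable firewalld    # run on each cluster node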
Then run again:
ceph-deploy --overwrite-conf mon create-initial
Possible error 2:
[ceph_deploy.mon][ERROR ] RuntimeError: config file /etc/ceph/ceph.conf exists with different content; use --overwrite-conf to overwrite
[ceph_deploy][ERROR ] GenericError: Failed to create 3 monitors
Cause: ceph.conf in the working directory was modified, but the updated file was never pushed to the other nodes, so it has to be pushed out.
Fix: ceph-deploy --overwrite-conf config push node1-4, or ceph-deploy --overwrite-conf mon create node1-4
List a node's disks: ceph-deploy disk list node1
Zap (wipe) a node's disk: ceph-deploy disk zap controller:/dev/sdb    # zap the whole disk, not a partition
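Assuming the disk layout used in the prepare step below (sdb on controller and compute, sdc on storage), all three data disks can be zapped in one go; a sketch:
ceph-deploy disk zap controller:/dev/sdb compute:/dev/sdb storage:/dev/sdc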
(8) Partition the data disk on each node
fdisk /dev/sdb    # double-check which disk you are partitioning, and remember to write the table with w
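If you would rather not step through fdisk interactively on every node, parted can create a single partition covering the whole disk non-interactively; a sketch, assuming the data disk should become one big partition (use /dev/sdc on the storage node):
sudo parted -s /dev/sdb mklabel gpt mkpart primary xfs 0% 100%    # creates /dev/sdb1 spanning the whole disk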
(9) Prepare the OSDs (OSD: Object Storage Daemon)
ceph-deploy osd prepare controller:/dev/sdb1 compute:/dev/sdb1 storage:/dev/sdc1
(10) Activate the OSDs
ceph-deploy osd activate controller:/dev/sdb1 compute:/dev/sdb1 storage:/dev/sdc1
(11) On the deployment node, transfer the config files and admin key to all nodes
ceph-deploy admin dlp controller compute storage
sudo chmod 644 /etc/ceph/ceph.client.admin.keyring    # on the other nodes
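The chmod can be pushed to every node from the deployment node in one loop; a small sketch, assuming the passwordless cent/sudo setup from part I:
for node in controller compute storage; do ssh $node sudo chmod 644 /etc/ceph/ceph.client.admin.keyring; done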
(12) Check the cluster from any Ceph node:
[root@controller old]# ceph -s
    cluster 8e03f0d7-06cb-49c6-b0fa-b9764e85e61a
     health HEALTH_OK
     monmap e1: 3 mons at {compute=192.168.253.194:6789/0,controller=192.168.253.135:6789/0,storage=192.168.253.15:6789/0}
            election epoch 6, quorum 0,1,2 storage,controller,compute
     osdmap e14: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds
      pgmap v2230: 64 pgs, 1 pools, 0 bytes data, 0 objects
            24995 MB used, 27186 MB / 52182 MB avail
                  64 active+clean
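ceph osd tree gives a per-OSD view as an extra check; with the cluster above it should show 3 OSDs, all up:
ceph osd tree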
III. RBD block device setup
Create an rbd image: rbd create disk01 --size 10G --image-feature layering
Delete it: rbd rm disk01
List rbd images: rbd ls -l
[cent@dlp ceph]$ rbd create disk01 --size 10G --image-feature layering
[cent@dlp ceph]$ rbd ls -l
NAME     SIZE PARENT FMT PROT LOCK
disk01 10240M          2
Map the rbd image: sudo rbd map disk01
Unmap it: sudo rbd unmap disk01
[root@controller ~]# rbd map disk01    # as root the sudo prefix can be dropped; other users need sudo
/dev/rbd0
rbd0 is now mapped under /dev/, but lsblk shows no mount point for /dev/rbd0 yet because it has not been formatted and mounted.
Show current mappings: rbd showmapped
Format disk01 with the xfs file system: sudo mkfs.xfs /dev/rbd0
[root@controller ~]# mkfs.xfs /dev/rbd0
meta-data=/dev/rbd0              isize=512    agcount=17, agsize=162816 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=2621440, imaxpct=25
         =                       sunit=1024   swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Mount the device: sudo mount /dev/rbd0 /mnt
[root@controller ~]# mount /dev/rbd0 /mnt
[root@controller ~]# lsblk
NAME            MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda               8:0    0  20G  0 disk
├─sda1            8:1    0   1G  0 part /boot
└─sda2            8:2    0  19G  0 part
  ├─centos-root 253:0    0  17G  0 lvm  /
  └─centos-swap 253:1    0   2G  0 lvm  [SWAP]
sdb               8:16   0  20G  0 disk
└─sdb1            8:17   0  20G  0 part /var/lib/ceph/osd/ceph-0
sdc               8:32   0  10G  0 disk
└─sdc1            8:33   0  10G  0 part /var/lib/ceph/osd/ceph-3
sr0              11:0    1 4.2G  0 rom  /mnt
rbd0            252:0    0  10G  0 disk /mnt
Verify the mount succeeded: df -hT
Stop the ceph-mds service:
systemctl stop ceph-mds@node1
ceph mds fail 0
List the storage pools:
ceph osd lspools
Expected output: 0 rbd,
Delete a storage pool:
ceph osd pool rm rbd rbd --yes-i-really-really-mean-it    # the pool name must be given twice
IV. Tearing down the environment:
Wipe the cluster configuration and packages:
ceph-deploy purge dlp node1 node2 node3 controller
ceph-deploy purgedata dlp node1 node2 node3 controller
Discard the authentication keys:
ceph-deploy forgetkeys
rm -rf ceph*
Source: https://www.cnblogs.com/zzzynx/p/11010650.html