VM image: CentOS 7 (1908)
1. Download the Ceph Nautilus yum repository packages
URL: https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/
Download the RPMs matching 14.2.5-0.el7 from these two directories:
noarch/ 14-Jan-2020 23:21
x86_64/ 14-Jan-2020 23:24
1.1 Download the matching RPMs from the x86_64 directory (on the physical host):
]# mkdir /var/ftp/pub/ceph
]# cd /var/ftp/pub/ceph
ceph]# mkdir ceph noarch
ceph]# ls
ceph noarch
Enter the /var/ftp/pub/ceph/ceph directory and create x86_64.txt:
ceph]# vim x86_64.txt
Note: with the mouse, select and copy all the text on the page
"https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/x86_64/"
and paste it into x86_64.txt, so the file contains the full directory listing.
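If you prefer not to copy the page by hand, here is a minimal sketch that pulls the directory listing with curl and extracts the .rpm names; it assumes the mirror's plain HTML index uses href="..." links:
ceph]# curl -s https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/x86_64/ \
> | grep -o 'href="[^"]*\.rpm"' | sed 's/^href="//;s/"$//' > x86_64.txt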
1.2 Write the download script:
ceph]# cat get.sh
#!/bin/bash
# $1 is the mirror sub-directory name, e.g. x86_64
rpm_file=/var/ftp/pub/ceph/ceph/$1.txt
rpm_netaddr=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/$1
for i in `cat $rpm_file`
do
    # download only the entries that look like 14.2.5-0 rpm packages
    if [[ $i =~ rpm ]] && [[ $i =~ 14.2.5-0 ]]
    then
        wget $rpm_netaddr/$i
    fi
done
1.3 Run the script to download the RPMs:
ceph]# bash get.sh x86_64
Check the result:
ceph]# ls
ceph-14.2.5-0.el7.x86_64.rpm
ceph-base-14.2.5-0.el7.x86_64.rpm
ceph-common-14.2.5-0.el7.x86_64.rpm
ceph-debuginfo-14.2.5-0.el7.x86_64.rpm
cephfs-java-14.2.5-0.el7.x86_64.rpm
ceph-fuse-14.2.5-0.el7.x86_64.rpm
...
ceph]# mv get.sh x86_64.txt ../noarch/
ceph]# createrepo .
Spawning worker 0 with 11 pkgs
Spawning worker 1 with 11 pkgs
Spawning worker 2 with 10 pkgs
Spawning worker 3 with 10 pkgs
Workers Finished
Saving Primary metadata
Saving file lists metadata
Saving other metadata
Generating sqlite DBs
Sqlite DBs complete
1.4 Repeat the same procedure for the noarch directory: download the 14.2.5 RPMs into /var/ftp/pub/ceph/noarch and run createrepo . there as well.
Note:
In the noarch/ directory some RPM file names are truncated in the listing, for example:
ceph-mgr-diskprediction-cloud-14.2.5-0.el7.noar..> 14-Jan-2020 23:18 85684
The script cannot download these, so fetch them manually by clicking the links.
In addition, the following three packages are needed later but are not matched by the script, so download them manually as well (a short loop for this follows the list):
ceph-deploy-2.0.1-0.noarch.rpm
ceph-medic-1.0.4-16.g60cf7e9.el7.noarch.rpm
ceph-release-1-1.el7.noarch.rpm
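A short loop for fetching those three, assuming they are still published under noarch/ on the same Aliyun mirror, followed by creating the repo metadata for the noarch directory:
]# cd /var/ftp/pub/ceph/noarch
noarch]# for f in ceph-deploy-2.0.1-0.noarch.rpm \
> ceph-medic-1.0.4-16.g60cf7e9.el7.noarch.rpm \
> ceph-release-1-1.el7.noarch.rpm
> do
>     wget https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch/$f
> done
noarch]# createrepo .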
1.5 Build a local yum repository from the downloaded RPMs, to be used by the virtual-machine Ceph cluster
]# vim /etc/yum.repos.d/ceph.repo
[ceph]
name=ceph repo
baseurl=ftp://192.168.4.1/pub/ceph/ceph
gpgcheck=0
enabled=1
[ceph-noarch]
name=Ceph noarch packages
baseurl=ftp://192.168.4.1/pub/ceph/noarch
gpgcheck=0
enabled=1
Then copy this file to every virtual machine, for example with the loop below.
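One way to push it out, reusing the cluster IPs from the host table in section 2.1 (this assumes the SSH keys from section 2.2 are already in place, otherwise you will be prompted for passwords):
]# for i in 10 11 12 13 14
> do
>     scp /etc/yum.repos.d/ceph.repo 192.168.4.$i:/etc/yum.repos.d/
> done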
2. Create the virtual machines and prepare the cluster environment
2.1 Create the VMs, set the hostnames, and bring up the network interfaces
192.168.4.10 client
192.168.4.11 node1 admin,osd, mon,mgr
192.168.4.12 node2 osd,mds
192.168.4.13 node3 osd,mds
192.168.4.14 node4 spare
2.2 Configure passwordless SSH between all hosts (including to themselves)
]# ssh-keygen -f /root/.ssh/id_rsa -N ''
]# for i in 10 11 12 13 14
> do
> ssh-copy-id 192.168.4.$i
> done
2.3 Edit /etc/hosts and sync it to every host, for example with the loop shown after the file.
Warning: the names in /etc/hosts must match each machine's actual hostname!
]# vim /etc/hosts
... ...
192.168.4.10 client
192.168.4.11 node1
192.168.4.12 node2
192.168.4.13 node3
192.168.4.14 node4
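A minimal loop to sync the file, using the hostnames defined above:
]# for host in client node1 node2 node3 node4
> do
>     scp /etc/hosts $host:/etc/hosts
> done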
2.4 Configure NTP time synchronization
Set up an NTP server on the physical host:
]# yum -y install chrony
]# vim /etc/chrony.conf
server ntp.aliyun.com iburst
allow 192.168.4.0/24
local stratum 10
]# systemctl restart chronyd
]# chronyc sources -v # a leading * means time synchronization succeeded
...
^* 203.107.6.88...
All other nodes synchronize time against this NTP server (node1 shown as the example):
]# vim /etc/chrony.conf
server 192.168.4.1 iburst
]# systemctl restart chronyd
]# chronyc sources -v # a leading * means time synchronization succeeded
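Since every VM uses the same client-side configuration, here is a sketch that copies node1's chrony.conf to the remaining nodes and restarts chronyd there (hostnames taken from the table in 2.1):
]# for host in node2 node3 node4 client
> do
>     scp /etc/chrony.conf $host:/etc/chrony.conf
>     ssh $host systemctl restart chronyd
> done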
2.5 Prepare the storage disks
On the physical host, attach three disks to each virtual machine (a libvirt sketch follows the lsblk output):
]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vdb 252:16 0 20G 0 disk
vdc 252:32 0 20G 0 disk
vdd 252:48 0 20G 0 disk
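How the three disks get attached depends on your hypervisor. A minimal sketch assuming KVM/libvirt, with the VMs defined as libvirt domains named node1/node2/node3 and images kept under /var/lib/libvirt/images (adjust names and paths to your environment):
]# for vm in node1 node2 node3
> do
>     for disk in vdb vdc vdd
>     do
>         qemu-img create -f qcow2 /var/lib/libvirt/images/$vm-$disk.qcow2 20G
>         virsh attach-disk $vm /var/lib/libvirt/images/$vm-$disk.qcow2 $disk \
>             --driver qemu --subdriver qcow2 --persistent
>     done
> done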
3. Deploy the Ceph cluster
Install the deployment tool ceph-deploy
Create the Ceph cluster
Prepare the journal disk partitions
Create the OSD storage
Check and verify the cluster status
3.1 Pre-deployment setup:
node1:
Install pip:
]# yum -y install python3
]# wget --no-check-certificate https://pypi.python.org/packages/ff/d4/209f4939c49e31f5524fa0027bf1c8ec3107abaf7c61fdaad704a648c281/setuptools-21.0.0.tar.gz#md5=81964fdb89534118707742e6d1a1ddb4
]# wget --no-check-certificate https://pypi.python.org/packages/41/27/9a8d24e1b55bd8c85e4d022da2922cb206f183e2d18fee4e320c9547e751/pip-8.1.1.tar.gz#md5=6b86f11841e89c8241d689956ba99ed7
]# tar -xf setuptools-21.0.0.tar.gz
]# tar -xf pip-8.1.1.tar.gz
setuptools-21.0.0]# python setup.py install
pip-8.1.1]# python setup.py install
Install ceph-deploy:
]# yum -y install ceph-deploy
]# ceph-deploy --version
2.0.1
Create the working directory:
]# mkdir ceph-cluster
]# cd ceph-cluster/
On all nodes (note: all of them):
]# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
]# yum clean all;yum repolist
]# yum -y install yum-priorities
]# yum -y install epel-release
]# rm -rf /etc/yum.repos.d/epel.repo.rpmnew
]# yum -y install ceph-release
]# rm -rf /etc/yum.repos.d/ceph.repo.rpmnew
]# yum -y install ceph
This fails with:
Error: Package: 2:ceph-mgr-14.2.5-0.el7.x86_64 (ceph)
Requires: python-werkzeug
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
]# wget http://rpmfind.net/linux/mageia/distrib/6/x86_64/media/core/updates/python-werkzeug-0.11.3-1.1.mga6.noarch.rpm
]# yum -y install python-werkzeug-0.11.3-1.1.mga6.noarch.rpm
Continue installing ceph:
]# yum -y install ceph
]# yum -y install ceph-radosgw
Check the version:
]# ceph --version
ceph version 14.2.5 (ad5bd132e1492173c85fda2cc863152730b16a92) nautilus (stable)
3.2 Create the Ceph cluster configuration (run on node1)
cluster]# ceph-deploy new node1 node2 node3
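ceph-deploy new writes a minimal ceph.conf (plus ceph.mon.keyring) into the working directory. Roughly what it contains, with the fsid shown later by ceph -s and the mon addresses from the host table (your fsid will differ):
cluster]# cat ceph.conf
[global]
fsid = 5e96cf02-b3c0-42b2-b357-d1186569d720
mon_initial_members = node1, node2, node3
mon_host = 192.168.4.11,192.168.4.12,192.168.4.13
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx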
Install the Ceph packages on all nodes:
cluster]# ceph-deploy install node1 node2 node3
...
[node3][INFO ] Running command: ceph --version
[node3][DEBUG ] ceph version 14.2.5 (ad5bd132e1492173c85fda2cc863152730b16a92) nautilus (stable)
Deploy the mon services:
cluster]# ceph-deploy mon create-initial
Distribute the ceph.client.admin.keyring file:
cluster]# ceph-deploy admin node1 node2 node3
Check the cluster status:
cluster]# ceph -s
cluster:
id: 5e96cf02-b3c0-42b2-b357-d1186569d720
health: HEALTH_OK
services:
mon: 3 daemons, quorum node1,node2,node3 (age 42s)
mgr: no daemons active
osd: 0 osds: 0 up, 0 in
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 0 B used, 0 B / 0 B avail
pgs:
3.3 Create the OSDs
cluster]# lsblk
vdb 253:16 0 20G 0 disk
vdc 253:32 0 20G 0 disk
vdd 253:48 0 20G 0 disk
Prepare the disk partitions (node1, node2, and node3 all need the same steps):
cluster]# parted /dev/vdb mklabel gpt
cluster]# parted /dev/vdb mkpart primary 1M 50%
cluster]# parted /dev/vdb mkpart primary 50% 100%
cluster]# chown ceph.ceph /dev/vdb1
cluster]# chown ceph.ceph /dev/vdb2
# These two partitions will serve as journal disks for the storage node
cluster]# lsblk
vdb 253:16 0 20G 0 disk
├─vdb1 253:17 0 10G 0 part
└─vdb2 253:18 0 10G 0 part
vdc 253:32 0 20G 0 disk
vdd 253:48 0 20G 0 disk
Make the ownership persistent across reboots with a udev rule:
cluster]# vim /etc/udev/rules.d/70-vdb.rules
ENV{DEVNAME}=="/dev/vdb1",OWNER="ceph",GROUP="ceph"
ENV{DEVNAME}=="/dev/vdb2",OWNER="ceph",GROUP="ceph"
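To apply the new rule for the current boot without rebooting (optional, since the chown commands above already cover it):
cluster]# udevadm control --reload-rules
cluster]# udevadm trigger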
Zap (wipe) the data disks (run from node1 only):
cluster]# ceph-deploy disk zap node1 /dev/vd{c,d}
cluster]# ceph-deploy disk zap node2 /dev/vd{c,d}
cluster]# ceph-deploy disk zap node3 /dev/vd{c,d}
Create the OSD storage (run from node1 only):
# Create the OSD devices: vdc provides storage space for the cluster, vdb1 serves as its JOURNAL cache
# Each storage device gets its own cache device; the cache should be an SSD and does not need to be large
cluster]# ceph-deploy osd create node1 --data /dev/vdc --journal /dev/vdb1
cluster]# ceph-deploy osd create node1 --data /dev/vdd --journal /dev/vdb2
cluster]# ceph-deploy osd create node2 --data /dev/vdc --journal /dev/vdb1
cluster]# ceph-deploy osd create node2 --data /dev/vdd --journal /dev/vdb2
cluster]# ceph-deploy osd create node3 --data /dev/vdc --journal /dev/vdb1
cluster]# ceph-deploy osd create node3 --data /dev/vdd --journal /dev/vdb2
Verify:
cluster]# ceph -s
cluster:
id: 5e96cf02-b3c0-42b2-b357-d1186569d720
health: HEALTH_WARN
no active mgr
services:
mon: 3 daemons, quorum node1,node2,node3 (age 5m)
mgr: no daemons active
osd: 6 osds: 6 up (since 7s), 6 in (since 7s)
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 0 B used, 0 B / 0 B avail
pgs:
The cluster warns: no active mgr.
Configure a mgr.
Create a mgr named mgr1 on node1:
cluster]# ceph-deploy mgr create node1:mgr1
The warning is gone:
cluster]# ceph -s
cluster:
id: 5e96cf02-b3c0-42b2-b357-d1186569d720
health: HEALTH_OK
services:
mon: 3 daemons, quorum node1,node2,node3 (age 5m)
mgr: mgr1(active, since 6s)
osd: 6 osds: 6 up (since 56s), 6 in (since 56s)
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 6.0 GiB used, 108 GiB / 114 GiB avail
pgs:
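As a further check, ceph osd tree should show the six OSDs distributed two per host across node1, node2, and node3:
cluster]# ceph osd tree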
4. Create Ceph block storage
Create block storage images
Map an image on the client
Create image snapshots
Restore data from a snapshot
Clone an image from a snapshot
Delete snapshots and images
4.1 Create images (node1)
List the storage pools:
]# ceph osd lspools
cluster]# ceph osd pool create pool-zk 100 # 100 is the number of placement groups (pg_num)
pool 'pool-zk' created
Tag the pool for use as block storage (rbd):
cluster]# ceph osd pool application enable pool-zk rbd
Rename the pool to rbd:
cluster]# ceph osd pool rename pool-zk rbd
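To confirm the settings of the renamed pool, for example its placement-group count:
cluster]# ceph osd pool get rbd pg_num
pg_num: 100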
Create two images:
cluster]# rbd create demo-image --image-feature layering --size 10G
cluster]# rbd create rbd/image --image-feature layering --size 10G
cluster]# rbd list
demo-image
image
Inspect an image:
cluster]# rbd info demo-image
rbd image 'demo-image':
size 10 GiB in 2560 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 1111d288dfb3
block_name_prefix: rbd_data.1111d288dfb3
format: 2
features: layering
op_features:
flags:
create_timestamp: Mon Jan 20 00:25:16 2020
access_timestamp: Mon Jan 20 00:25:16 2020
modify_timestamp: Mon Jan 20 00:25:16 2020
4.2 Resize images dynamically
Shrink the image:
cluster]# rbd resize --size 7G image --allow-shrink
Resizing image: 100% complete...done.
cluster]# rbd info image
rbd image 'image':
size 7 GiB in 1792 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 10ee3721a5af
block_name_prefix: rbd_data.10ee3721a5af
format: 2
features: layering
op_features:
flags:
create_timestamp: Mon Jan 20 00:25:47 2020
access_timestamp: Mon Jan 20 00:25:47 2020
modify_timestamp: Mon Jan 20 00:25:47 2020
Expand the image:
]# rbd resize --size 15G image
Resizing image: 100% complete...done.
cluster]# rbd info image
rbd image 'image':
size 15 GiB in 3840 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 10ee3721a5af
block_name_prefix: rbd_data.10ee3721a5af
format: 2
features: layering
op_features:
flags:
create_timestamp: Mon Jan 20 00:25:47 2020
access_timestamp: Mon Jan 20 00:25:47 2020
modify_timestamp: Mon Jan 20 00:25:47 2020
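Note that resizing the RBD image does not resize any filesystem on it. Here the image has not been formatted yet, but if an XFS filesystem were already mounted from it (as in 4.3 below), it would need to be grown separately, e.g.:
]# xfs_growfs /mnt # grow the mounted XFS filesystem to the new image size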
4.3 Access via KRBD
Inside the cluster, map the image as a local disk:
cluster]# rbd map demo-image
/dev/rbd0
]# lsblk
… …
rbd0 251:0 0 10G 0 disk
cluster]# mkfs.xfs /dev/rbd0
cluster]# mount /dev/rbd0 /mnt
cluster]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/rbd0 10G 33M 10G 1% /mnt
Access from the client via KRBD (on client):
# The client needs the ceph-common package
# Copy the config file (otherwise the client does not know where the cluster is)
# Copy the admin keyring (otherwise the client has no permission to connect)
]# yum -y install ceph-common
]# scp 192.168.4.11:/etc/ceph/ceph.conf /etc/ceph/
]# scp 192.168.4.11:/etc/ceph/ceph.client.admin.keyring /etc/ceph/
]# rbd map image
/dev/rbd0
]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
rbd0 251:0 0 15G 0 disk
]# rbd showmapped
id pool image snap device
0 rbd image - /dev/rbd0
Format and mount on the client (client):
]# mkfs.xfs /dev/rbd0
]# mount /dev/rbd0 /mnt/
]# echo "test" > /mnt/test.txt
]# ls /mnt/
test.txt
4.4 Create image snapshots (node1)
List the snapshots of image:
cluster]# rbd snap ls image (no output — there are no snapshots yet)
Create a snapshot:
cluster]# rbd snap create image --snap image-snap1
List the snapshots again:
cluster]# rbd snap ls image
SNAPID NAME SIZE PROTECTED TIMESTAMP
4 image-snap1 15 GiB Mon Jan 20 00:36:47 2020
### The snapshot taken at this point contains test.txt ###
Delete the test file the client wrote:
client ~]# rm -rf /mnt/test.txt
Restore from the snapshot (the image must be taken offline before rolling back).
Take the image offline on the client:
client ~]# umount /mnt/ # do not run this while your working directory is inside /mnt
client ~]# rbd unmap image
Roll back on node1:
cluster]# rbd snap rollback image --snap image-snap1
Rolling back to snapshot: 100% complete...done.
Remount on the client:
client ~]# rbd map image
client ~]# mount /dev/rbd0 /mnt/
Verify that the data is back:
client ~]# ls /mnt/
test.txt
4.5 Clone a snapshot (node1): image-clone
Clone the snapshot:
cluster]# rbd snap protect image --snap image-snap1
cluster]# rbd snap rm image --snap image-snap1 # fails, because the snapshot is protected
cluster]# rbd clone image --snap image-snap1 image-clone --image-feature layering
# Clone a new image, image-clone, from image's snapshot image-snap1
Inspect the relationship between the clone and its parent snapshot:
cluster]# rbd info image-clone
rbd image 'image-clone':
size 15 GiB in 3840 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 115f22e6cb86
block_name_prefix: rbd_data.115f22e6cb86
format: 2
features: layering
op_features:
flags:
create_timestamp: Mon Jan 20 00:53:42 2020
access_timestamp: Mon Jan 20 00:53:42 2020
modify_timestamp: Mon Jan 20 00:53:42 2020
parent: rbd/image@image-snap1
overlap: 15 GiB
# Most of the clone's data is still served from the snapshot chain
# To let the clone work independently, all data must be copied out of the parent snapshot, which takes time!
Make the clone independent:
cluster]# rbd flatten image-clone
cluster]# rbd info image-clone
rbd image 'image-clone':
size 15 GiB in 3840 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 115f22e6cb86
block_name_prefix: rbd_data.115f22e6cb86
format: 2
features: layering
op_features:
flags:
create_timestamp: Mon Jan 20 00:53:42 2020
access_timestamp: Mon Jan 20 00:53:42 2020
modify_timestamp: Mon Jan 20 00:53:42 2020
# Note: the parent snapshot information is gone!
4.6 Other operations
Unmap the disk on the client (client):
]# umount /mnt
]# rbd showmapped
id pool image snap device
0 rbd image - /dev/rbd0
]# rbd unmap /dev/rbd0
]# rbd showmapped (no output)
Delete the snapshots and images (node1):
cluster]# umount /mnt
]# rbd unmap /dev/rbd0
Unprotect the snapshot:
cluster]# rbd snap unprotect image --snap image-snap1
Delete the snapshot:
cluster]# rbd snap rm image --snap image-snap1
List the images:
cluster]# rbd list
demo-image
image
image-clone
Delete the images:
cluster]# rbd rm demo-image
cluster]# rbd rm image
cluster]# rbd rm image-clone
Source: oschina
Link: https://my.oschina.net/u/4274555/blog/4279600