Integrating Ceph with OpenStack

Create the required pools:

ceph osd pool create volumes 512
ceph osd pool create images 512
ceph osd pool create vms 512
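
The 512 here is the placement-group count and is only an example value; size it for your own cluster. On Ceph Luminous and newer the pools should also be initialized for RBD use before clients write to them; a minimal sketch (skip on older releases):

rbd pool init volumes
rbd pool init images
rbd pool init vms
ceph osd lspools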

Install the required packages on each node:

On the Controller node, install the Ceph management interface:

sudo apt-get install python-ceph libvirt-bin

On the Glance node, install python-rbd:

sudo apt-get install python-rbd 

On the Cinder-volume and Nova-compute nodes, install ceph-common:

sudo apt-get install ceph-common

Copy the Ceph configuration file to each node

On the Glance, Cinder-volume, and Nova-compute nodes:

ssh {your-openstack-server-node} sudo tee /etc/ceph/ceph.conf </etc/ceph/ceph.conf
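
With several target nodes, a short loop avoids repeating the command; a sketch assuming the placeholder hostnames glance-node, cinder-node and compute-node (replace with your own). The loop also creates /etc/ceph first, in case the Ceph packages have not been installed on the target yet:

for node in glance-node cinder-node compute-node; do
    ssh $node sudo mkdir -p /etc/ceph
    ssh $node sudo tee /etc/ceph/ceph.conf </etc/ceph/ceph.conf
done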

Create CephX authentication users:

ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
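
To confirm the users and their capabilities look right before copying keys around:

ceph auth get client.glance
ceph auth get client.cinder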

Copy the keys

Copy the keyrings to the Glance, Cinder-volume, Nova-compute and Cinder-Backup nodes and set their ownership:

ceph auth get-or-create client.glance | ssh {your-glance-api-server} sudo tee /etc/ceph/ceph.client.glance.keyring
ssh {your-glance-api-server} sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring
ceph auth get-or-create client.cinder | ssh {your-cinder-volume-server} sudo tee /etc/ceph/ceph.client.cinder.keyring
ssh {your-cinder-volume-server} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder | ssh {your-nova-compute-server} sudo tee /etc/ceph/ceph.client.cinder.keyring
# The client.cinder key must also be stored in libvirt; the libvirt process uses it to access the cluster when attaching block devices from Cinder
ceph auth get-key client.cinder | ssh {your-compute-node} tee client.cinder.key
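
Optionally verify that the key arrived intact; the two commands below, run from the Ceph admin node, should print the same value:

ceph auth get-key client.cinder
ssh {your-compute-node} cat client.cinder.key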

Configure the Glance node

In the glance-api configuration file, modify the following:

[DEFAULT]
default_store = rbd
show_image_direct_url = True
[glance_store]
stores = rbd
rbd_store_pool = images
rbd_store_user = glance
ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
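
Once glance-api has been restarted (see the final step), a quick way to check the backend is to upload a raw image and look for it in the images pool; a sketch assuming a locally downloaded cirros.raw file and a configured openstack CLI:

openstack image create --disk-format raw --container-format bare --file cirros.raw cirros-rbd-test
rbd -p images ls

Raw images are preferable with an RBD backend, since qcow2 images cannot be cloned copy-on-write from the pool.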

Configure the Cinder-volume node

Add the following to cinder.conf:

[DEFAULT]
enabled_backends = ceph
glance_api_version = 2
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
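
After cinder-volume has been restarted (see the final step), the backend can be checked by creating a small test volume and listing the pool; the volume name below is a placeholder:

openstack volume create --size 1 rbd-test-vol
openstack volume show rbd-test-vol -c status
rbd -p volumes ls

The volume should reach the available status and appear in the pool as volume-<uuid>.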

Configure the Nova-compute node

Create secret.xml and add the key to libvirt:

cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
sudo virsh secret-define --file secret.xml
sudo virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 --base64 $(cat client.cinder.key)
rm client.cinder.key secret.xml
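
The secret can be verified afterwards; the stored value should be the base64 key that was copied to the node earlier:

sudo virsh secret-list
sudo virsh secret-get-value 457eb676-33da-42ec-9a8c-9293d545c337

The UUID above is the sample value used throughout this article; whatever UUID you define must match rbd_secret_uuid in both cinder.conf and nova.conf, and it is simplest to define the same secret on every compute node.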

Edit the nova.conf file

Add the following:

[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
inject_password = false
inject_key = false
inject_partition = -2
block_migration_flag = VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER, VIR_MIGRATE_LIVE, VIR_MIGRATE_TUNNELLED, VIR_MIGRATE_NON_SHARED_INC, VIR_MIGRATE_PERSIST_DEST
live_migration_bandwidth = 0
live_migration_flag = VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER, VIR_MIGRATE_LIVE, VIR_MIGRATE_TUNNELLED, VIR_MIGRATE_PERSIST_DEST
live_migration_uri = qemu+tcp://%s/system
hw_disk_discard = unmap
disk_cachemodes = "network=writeback"
cpu_mode = host-passthrough
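
With nova-compute restarted, a final check is to boot a test instance and confirm its ephemeral disk is created in the vms pool; the flavor, image and network names below are placeholders for whatever exists in your environment:

openstack server create --flavor m1.tiny --image cirros-rbd-test --network private rbd-boot-test
rbd -p vms ls

The instance disk shows up as <instance-uuid>_disk once the server is ACTIVE.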

Restart the services

sudo service glance-api restart
sudo service nova-compute restart
sudo service cinder-volume restart
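
To confirm everything is wired up after the restarts, check that the services report as up and that the pools are actually being written to (this assumes the openstack CLI is configured with admin credentials):

openstack volume service list
openstack compute service list
ceph -s
ceph df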