Ceph is one of the most interesting distributed storage systems available, with very active development and a complete set of features that make it a valuable candidate for cloud storage services. This tutorial goes through the steps required to set up a Ceph cluster (including some related troubleshooting) and to access it with a simple client using librados. Please refer to the Ceph documentation for detailed insights into Ceph components.
(Part 2/3 – Troubleshooting, Part 3/3 – librados client)
Assumptions
- Ceph version: 0.79
- Installation with ceph-deploy
- Operating system for the Ceph nodes: Ubuntu 14.04
Cluster architecture
A minimal Ceph deployment includes one Ceph Monitor (MON) and a number of Object Storage Devices (OSDs).
Administrative and control operations are issued from an admin node, which does not necessarily have to be separate from the Ceph cluster (e.g., the monitor node can also act as the admin node). Metadata Server (MDS) nodes are required only for the Ceph Filesystem (Ceph Block Devices and Ceph Object Storage do not use MDS).
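For reference, the example cluster used in the rest of this tutorial consists of three nodes; the addresses are the ones that will appear later in /etc/hosts, and the monitor node also acts as the admin node:
- mon0 (192.168.58.2): monitor and admin node
- osd0 (192.168.58.3): OSD node
- osd1 (192.168.58.4): OSD node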
Preparing the storage
WARNING: preparing the storage for Ceph means deleting a disk’s partition table and losing all of its data. Proceed only if you know exactly what you are doing!
Ceph will need some physical storage to be used as Object Storage Devices (OSD) and Journal. As the project documentation recommends, for better performance the Journal should be on a separate drive from the OSD. Ceph supports ext4, btrfs and xfs. I tried setting up clusters with both btrfs and xfs, but I could achieve stable results only with xfs, so I will refer to the latter here.
- Prepare a GPT partition table (I have observed stability issues when using a DOS partition table)
$ sudo parted /dev/sd<x>
(parted) mklabel gpt
(parted) mkpart primary xfs 0 100%
(parted) quit
If parted complains about alignment issues (“Warning: The resulting partition is not properly aligned for best performance”), check these two links to find a solution: 1 and 2.
- Format the disk with xfs (you might need to install xfs tools with sudo apt-get install xfsprogs)
$ sudo mkfs.xfs /dev/sd<x>1
- Create a Journal partition (raw/unformatted)
$ sudo parted /dev/sd<y>
(parted) mklabel gpt
(parted) mkpart primary 0 100%
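As a quick sanity check before moving on (the device names are placeholders, as above), you can print the resulting layouts; the data disk should show a single xfs partition and the journal disk a single unformatted partition:
$ sudo parted /dev/sd<x> print
$ sudo parted /dev/sd<y> print
$ lsblk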
Install Ceph deploy
The ceph-deploy tool must only be installed on the admin node. Access to the other nodes for configuration purposes will be handled by ceph-deploy over SSH (with keys).
- Add the Ceph repository to your apt configuration; replace {ceph-stable-release} with the Ceph release name that you want to install (e.g., emperor, firefly, …)
$ echo deb http://ceph.com/debian-{ceph-stable-release}/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
- Install the trusted key with
$ wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
- If there is no repository for your Ubuntu version, you can try to select the newest one available by manually editing the file /etc/apt/sources.list.d/ceph.list and changing the Ubuntu codename (e.g., trusty -> raring)
deb http://ceph.com/debian-emperor raring main
- Install ceph-deploy
$ sudo apt-get update
$ sudo apt-get install ceph-deploy
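If you want to double-check which version was installed and that the package was picked up from the Ceph repository rather than the stock Ubuntu archive, the following optional commands can help:
$ ceph-deploy --version
$ apt-cache policy ceph-deploy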
Setup the admin node
Each Ceph node will be set up with a user having passwordless sudo permissions, and each node will store the public key of the admin node to allow passwordless SSH access. With this configuration, ceph-deploy will be able to install and configure every node of the cluster.
NOTE: the hostnames (i.e., the output of hostname -s) must match the Ceph node names!
- [optional] Create a dedicated user for cluster administration (this is particularly useful if the admin node is part of the Ceph cluster)
$ sudo useradd -d /home/cluster-admin -m cluster-admin -s /bin/bash
then set a password and switch to the new user
$ sudo passwd cluster-admin
$ su cluster-admin
- Install SSH server on all the cluster nodes (even if a cluster node is also an admin node)
$ sudo apt-get install openssh-server
- Add a ceph user on each Ceph cluster node (even if a cluster node is also an admin node) and give it passwordless sudo permissions
$ sudo useradd -d /home/ceph -m ceph -s /bin/bash
$ sudo passwd ceph
<Enter password>
$ echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
$ sudo chmod 0440 /etc/sudoers.d/ceph
- Edit the /etc/hosts file to add mappings to the cluster nodes. Example:
$ cat /etc/hosts
127.0.0.1 localhost
192.168.58.2 mon0
192.168.58.3 osd0
192.168.58.4 osd1
To enable DNS resolution with the hosts file, install dnsmasq
$ sudo apt-get install dnsmasq
- Generate a public key for the admin user and install it on every Ceph node
$ ssh-keygen
$ ssh-copy-id ceph@mon0
$ ssh-copy-id ceph@osd0
$ ssh-copy-id ceph@osd1
- Setup an SSH access configuration by editing the .ssh/config file. Example:
Host osd0
   Hostname osd0
   User ceph
Host osd1
   Hostname osd1
   User ceph
Host mon0
   Hostname mon0
   User ceph
- Before proceeding, check that ping and host commands work for each node
$ ping mon0
$ ping osd0
...
$ host osd0
$ host osd1
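- It is also worth confirming that passwordless SSH works and that each node’s short hostname matches its Ceph node name (see the NOTE above); for example, the following should print mon0 without asking for a password:
$ ssh mon0 hostname -s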
Setup the cluster
Administration of the cluster is done entirely from the admin node.
- Move to a dedicated directory to collect the files that ceph-deploy will generate. This will be the working directory for any further use of ceph-deploy
$ mkdir ceph-cluster
$ cd ceph-cluster
- Deploy the monitor node(s) – replace mon0 with the list of hostnames of the initial monitor nodes
$ ceph-deploy new mon0
[ceph_deploy.cli][INFO ] Invoked (1.4.0): /usr/bin/ceph-deploy new mon0
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][DEBUG ] Resolving host mon0
[ceph_deploy.new][DEBUG ] Monitor mon0 at 192.168.58.2
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[ceph_deploy.new][DEBUG ] Monitor initial members are ['mon0']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.58.2']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
- Add a public network entry in the ceph.conf file if you have separate public and cluster networks (check the network configuration reference)
public network = {ip-address}/{netmask}
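With the public network entry added by hand, a minimal ceph.conf could look roughly like the sketch below (the fsid is a placeholder, auth settings and other generated lines are omitted, and the network values match the example cluster of this tutorial):
[global]
fsid = <generated-fsid>
mon initial members = mon0
mon host = 192.168.58.2
public network = 192.168.58.0/24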
- Install Ceph on all the nodes of the cluster. Use the --no-adjust-repos option if you are using a different apt configuration for Ceph. NOTE: you may need to confirm the authenticity of the hosts if you are accessing them over SSH for the first time!
Example (replace mon0 osd0 osd1 with your node names):
$ ceph-deploy install --no-adjust-repos mon0 osd0 osd1
- Create monitor and gather keys
$ ceph-deploy mon create-initial
- The content of the working directory after this step should look like
cadm@mon0:~/my-cluster$ ls
ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  ceph.client.admin.keyring
ceph.conf  ceph.log  ceph.mon.keyring  release.asc
Prepare OSDs and OSD Daemons
When deploying OSDs, consider that a single node can run multiple OSD daemons and that, for better performance, the journal partition should be on a separate drive from the OSD.
- List disks on a node (replace osd0 with the name of your storage node(s))
$ ceph-deploy disk list osd0
This command is also useful for diagnostics: when an OSD is correctly mounted on Ceph, you should see entries similar to this one in the output:
[ceph-osd1][DEBUG ] /dev/sdb :
[ceph-osd1][DEBUG ]  /dev/sdb1 other, xfs, mounted on /var/lib/ceph/osd/ceph-0
- If you haven’t already prepared your storage, or if you want to reformat a partition, use the zap command (WARNING: this will erase the partition)
$ ceph-deploy disk zap --fs-type xfs osd0:/dev/sd<x>1
- Prepare and activate the disks (ceph-deploy also has a create command that should combine these two operations, but for some reason it was not working for me). In this example, we are using /dev/sd<x>1 as OSD and /dev/sd<y>2 as journal on two different nodes, osd0 and osd1
$ ceph-deploy osd prepare osd0:/dev/sd<x>1:/dev/sd<y>2 osd1:/dev/sd<x>1:/dev/sd<y>2
$ ceph-deploy osd activate osd0:/dev/sd<x>1:/dev/sd<y>2 osd1:/dev/sd<x>1:/dev/sd<y>2
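At this point, re-running the disk list command from above is a quick way to verify that the data partitions are now mounted under /var/lib/ceph/osd/, as in the example output shown earlier:
$ ceph-deploy disk list osd0
$ ceph-deploy disk list osd1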
Final steps
Now we need to copy the cluster configuration to all nodes and check the operational status of our Ceph deployment.
- Copy keys and configuration files (replace mon0 osd0 osd1 with the names of your Ceph nodes)
$ ceph-deploy admin mon0 osd0 osd1
- Ensure proper permissions for admin keyring
$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring
- Check the Ceph status and health
$ ceph health
$ ceph status
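For a more detailed picture of the cluster (optional, but handy when the health is not OK), you can also inspect the OSD tree and watch the cluster events in real time:
$ ceph osd tree
$ ceph -w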
If, at this point, the reported health of your cluster is HEALTH_OK, then most of the work is done. Otherwise, check the troubleshooting part of this tutorial.
Revert installation
There are useful commands to purge the Ceph installation and configuration from every node so that one can start over again from a clean state.
This will remove Ceph configuration and keys
ceph-deploy purgedata {ceph-node} [{ceph-node}]
ceph-deploy forgetkeys
This will also remove Ceph packages
ceph-deploy purge {ceph-node} [{ceph-node}]
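For example, a complete revert of the test cluster used in this tutorial (node names as above) would look like this:
$ ceph-deploy purge mon0 osd0 osd1
$ ceph-deploy purgedata mon0 osd0 osd1
$ ceph-deploy forgetkeys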
Before getting a healthy Ceph cluster I had to purge and reinstall many times, cycling through the “Setup the cluster”, “Prepare OSDs and OSD Daemons” and “Final steps” parts, while addressing every warning that ceph-deploy reported.