Ceph Mimic Quickstart

Ceph Release

The Mimic release, though only a month old, was chosen for multiple reasons. These are the most important:

  • Latest LTS release
  • Bluestore, a newer, more efficient (and more performant) storage backend
  • Bundled management console (based on OpenAttic)

Deployment Method

Several deployment methods were tested. The completely manual method was clean and fairly simple, but I ended up writing wrappers to make things simpler in the future. In the end I chose to use the stable-3.1 branch of the ceph-ansible project for the following reasons:

  • sanctioned (and hosted) by the ceph.com folks themselves
  • based on ansible, which I already use for plenty of other things
  • updated frequently
  • already supports latest mimic release with a stable branch
  • enables relatively simple cluster growth by adding nodes later

This comes fairly close to my source-plus-wrappers method, but is more portable and maintained by someone else.

Base Configuration

The installation is a cluster of 3 servers running CentOS 7.5; other versions and distributions should be nearly identical in operation. A single user, __ceph-admin, was created on each node and given the ability to run commands as root via sudo. A single ssh key pair was created and associated with that user on each host, which allows the __ceph-admin user to log in via ssh to any of the other nodes without a password. These two things are what allow the deployment to work smoothly, but the private key really only needs to be present on the machine being used for deployment.
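
The node preparation itself is not handled by the playbooks, so here is a minimal sketch of it; the NOPASSWD sudo rule and the temporary password step are assumptions to adapt to your own environment:

{
  # on every node: create the deployment user, set a temporary password,
  # and allow passwordless sudo (assumption: a NOPASSWD rule is acceptable)
  sudo useradd -m __ceph-admin
  sudo passwd __ceph-admin
  echo '__ceph-admin ALL=(ALL) NOPASSWD: ALL' | sudo tee /etc/sudoers.d/ceph-admin

  # on the deployment machine only: create one key pair and push the public
  # key to every node so __ceph-admin can ssh to them without a password
  sudo su - __ceph-admin
  ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
  for node in cephnode1 cephnode2 cephnode3; do
    ssh-copy-id __ceph-admin@$node
  done
}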

Preparation for Ceph Deployment

In order to make this deployment method as portable as possible, and to keep from polluting the OS installation itself, I used a python virtual environment for python prerequisites and git for the ceph-ansible project. This method should allow the deployment machine to be nearly any unix-like operating system, including Mac OS X or OpenBSD.

{
  # become the deployment user
  sudo su - __ceph-admin

  # environment prerequisites
  sudo yum install -y python2-virtualenv git

  # pull down the ceph-ansible project
  git clone https://github.com/ceph/ceph-ansible.git

  # create and activate the virtual environment
  virtualenv ceph-openstack-python
  . ceph-openstack-python/bin/activate

  # update pip, then install the requirements listed by the ceph-ansible project
  # (as of this writing the stable-3.1 branch does not include requirements.txt,
  # so install it while still on the default branch, before the checkout below)
  pip install --upgrade pip
  pip install -r ceph-ansible/requirements.txt

  # switch to the stable branch
  cd ceph-ansible
  git checkout stable-3.1
}

Create Inventory

{
cat > ceph-cluster.inventory <<!
[ceph1nodes]
cephnode1
cephnode2
cephnode3

[mons:children]
ceph1nodes

[osds:children]
ceph1nodes

[mgrs:children]
ceph1nodes
!
}
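
Before going further it is worth confirming that ansible can reach every node in that inventory as __ceph-admin and escalate to root; a quick check, run from the ceph-ansible directory with the virtualenv still active:

# verify connectivity and privilege escalation to every node
ansible all -i ceph-cluster.inventory -b -m ping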

Set Cluster Preferences

{
cat > group_vars/all.yml <<!
---
ceph_origin: repository
ceph_repository: community
ceph_stable_release: mimic

monitor_interface: pubnic0
radosgw_interface: pubnic0

public_network:  "10.21.216.0/22" # network via pubnic0
cluster_network: "10.21.220.0/22" # network via privnic0

# this config will create 3 bluestore osds, each placing
# its DB/WAL on a partition of /dev/ssd0

osd_objectstore: bluestore
osd_scenario: non-collocated

devices:
  - '/dev/sda'
  - '/dev/sdb'
  - '/dev/sdc'

dedicated_devices:
  - '/dev/ssd0'
  - '/dev/ssd0'
  - '/dev/ssd0'
!
}
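
One thing worth verifying before deploying: pubnic0 and privnic0 are this cluster's NIC names, so make sure the interface and network values in all.yml match what actually exists on the nodes, for example:

# confirm the public interface exists and carries an address on every node
ansible all -i ceph-cluster.inventory -m command -a 'ip -o -4 addr show pubnic0'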

Deploy the Ceph Cluster

# use the sample site playbook as-is
cp site.yml.sample site.yml

# deploy the cluster; time just reports how long the full run takes
time ansible-playbook -i ceph-cluster.inventory site.yml

# check cluster health
sudo ceph -s

# probably not needed for a fresh install
sudo ceph osd crush tunables optimal
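
Beyond ceph -s, a couple of other quick sanity checks; like the commands above, these assume a node with access to the admin keyring (hence sudo):

# every OSD should be up/in under the expected host
sudo ceph osd tree

# all daemons should report mimic (13.2.x) binaries
sudo ceph versions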

Enable Dashboard

# enable the ceph manager dashboard, new in the mimic release
# http://docs.ceph.com/docs/mimic/mgr/dashboard/#enabling
sudo su -
ceph mgr module enable dashboard
ceph dashboard create-self-signed-cert
# restart the dashboard module so it picks up the new certificate
ceph mgr module disable dashboard
ceph mgr module enable dashboard
ceph dashboard set-login-credentials <username> <password>

# check dashboard is enabled
ceph mgr services
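
By default the dashboard listens on all addresses on port 8443; if it needs to live on a specific address or port, the mimic dashboard docs describe setting that through the mgr config, roughly as follows (placeholders are yours to fill in):

# optional: bind the dashboard to a specific address/port
ceph config set mgr mgr/dashboard/server_addr <ip-address>
ceph config set mgr mgr/dashboard/server_port <port>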

Create Pools

# create the pools as recommended by the official ceph calc tool
# for block storage -- use your own settings!
# https://ceph.com/pgcalc/
ceph osd pool create volumes-backup 512
ceph osd pool set volumes-backup size 3
# wait for pg creation to settle before creating the next pool
while [ $(ceph -s | grep creating -c) -gt 0 ]; do echo -n .;sleep 1; done

ceph osd pool create volumes 1024
ceph osd pool set volumes size 3
while [ $(ceph -s | grep creating -c) -gt 0 ]; do echo -n .;sleep 1; done

ceph osd pool create vms 512
ceph osd pool set vms size 3
while [ $(ceph -s | grep creating -c) -gt 0 ]; do echo -n .;sleep 1; done

ceph osd pool create images 128
ceph osd pool set images size 3
while [ $(ceph -s | grep creating -c) -gt 0 ]; do echo -n .;sleep 1; done

# tag the pools
ceph osd pool application enable vms rbd
ceph osd pool application enable images rbd
ceph osd pool application enable volumes rbd
ceph osd pool application enable volumes-backup rbd
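
At this point a quick listing will confirm that the pg counts, replica sizes, and application tags all ended up as intended:

# show every pool with its size, pg_num, and application tags
ceph osd pool ls detail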

# create example clients and their permissions
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes-backup'
ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
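
These clients are intended for an OpenStack-style setup (glance, cinder, nova), and each consuming host will eventually need its keyring. A sketch of exporting the keyrings so they can be copied to those hosts:

# write out the keyrings for distribution to the consuming hosts
ceph auth get client.glance -o ceph.client.glance.keyring
ceph auth get client.cinder -o ceph.client.cinder.keyring
ceph auth get client.cinder-backup -o ceph.client.cinder-backup.keyring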