
Single node Ceph cluster on CentOS 7 with VirtualBox

  • Sun 17 January 2016
  • Cloud

If you work with cloud infrastructure, you know that Ceph is nowadays the standard when you talk about distributed storage. The only good alternative, imho, is GlusterFS, but today I want to talk about installing a single node Ceph cluster on CentOS 7 using VirtualBox. This is for testing purposes only. I repeat: for testing purposes only. A single node Ceph cluster makes no sense in production, but it can be useful to get familiar with the environment for the first time. Also, if you have a small amount of resources on your local development machine, a single node installation lets you simulate a Ceph environment with only one node.

REQUIREMENTS

Before starting the Ceph deployment, make sure to create two VMs in VirtualBox with the following characteristics (a VBoxManage sketch follows the list):

  • at least 2 CPUs
  • at least 2GB of RAM
  • two network adapters: one NAT and one Host-only Adapter with a network assigned (in this example I've chosen the 10.10.10.0/24 network)
  • one disk each, with at least 50GB of space
  • a clean install of CentOS 7, with one machine configured as 10.10.10.100 and the other as 10.10.10.101 on the host-only interface (in addition to the DHCP-only NAT adapter)
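
If you prefer the command line to the VirtualBox GUI, a sketch like the following can create one of the two VMs. Treat it as a starting point under my assumptions: the VM name ceph-node, the vboxnet0 host-only interface and the disk path are placeholders, so adjust them to your setup (and repeat for the admin node).

# Create a host-only network for the 10.10.10.0/24 range (it usually comes up as vboxnet0)
VBoxManage hostonlyif create
VBoxManage hostonlyif ipconfig vboxnet0 --ip 10.10.10.1 --netmask 255.255.255.0

# Create and register the VM with 2 CPUs, 2GB of RAM and the two adapters
VBoxManage createvm --name ceph-node --ostype RedHat_64 --register
VBoxManage modifyvm ceph-node --cpus 2 --memory 2048 --nic1 nat --nic2 hostonly --hostonlyadapter2 vboxnet0

# Create and attach a 50GB disk
VBoxManage createhd --filename ceph-node.vdi --size 51200
VBoxManage storagectl ceph-node --name SATA --add sata
VBoxManage storageattach ceph-node --storagectl SATA --port 0 --device 0 --type hdd --medium ceph-node.vdi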

CEPH NODE (10.10.10.101)

# Set SELinux to permissive for the current session and disable it permanently
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

firewall-cmd --zone=public --add-port=6789/tcp --permanent
firewall-cmd --zone=public --add-port=6800-7100/tcp --permanent
firewall-cmd --reload
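
To double check that the ports Ceph needs (6789/tcp for the monitor, 6800-7100/tcp for the OSD daemons) are actually open:

firewall-cmd --zone=public --list-ports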

sudo useradd -d /home/ceph-admin -m ceph-admin -s /bin/bash
sudo passwd ceph-admin
echo "ceph-admin ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph-admin
sudo visudo # locate "Defaults requiretty" and change it to "Defaults:ceph-admin !requiretty"

sudo chmod 0440 /etc/sudoers.d/ceph-admin
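
To make sure passwordless sudo actually works for the new user (sudo -n fails instead of prompting if a password would be required):

su - ceph-admin -c 'sudo -n true && echo "passwordless sudo OK"'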

ADMIN NODE (10.10.10.100)

# Set SELinux to permissive for the current session and disable it permanently
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

tee /etc/yum.repos.d/ceph-deploy.repo > /dev/null <<EOF
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-firefly/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
EOF
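
Before installing, it doesn't hurt to confirm that yum can actually see the new repository:

yum repolist | grep -i ceph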

sudo yum update && sudo yum install -y ceph-deploy ntp ntpdate ntp-doc openssh-server ceph-common ceph-mds

echo "10.10.10.100 admin-node" >> /etc/hosts
echo "10.10.10.101 ceph-node" >> /etc/hosts

sudo useradd -d /home/ceph-admin -m ceph-admin -s /bin/bash
sudo passwd ceph-admin
echo "ceph-admin ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph-admin
sudo chmod 0440 /etc/sudoers.d/ceph-admin

su - ceph-admin

# Generate an ssh key without passphrase, then copy it to the Ceph node
ssh-keygen
ssh-copy-id ceph-admin@ceph-node

tee ~/.ssh/config > /dev/null <<EOF
Host ceph-node
    Hostname ceph-node
    User ceph-admin
EOF
chmod 600 ~/.ssh/config
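
With the key copied and the config above in place, this should log you into the Ceph node without asking for a password:

ssh ceph-node hostname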


mkdir ~/ceph-cluster && cd ~/ceph-cluster

ceph-deploy new ceph-node

echo "osd pool default size = 1" >> ceph.conf
echo "public network = 10.10.10.0/24" >> ceph.conf

ceph-deploy install ceph-admin ceph-node
ceph-deploy mon create-initial
ceph-deploy gatherkeys ceph-node
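
After gatherkeys, the keyrings should be sitting in the working directory:

ls -l ~/ceph-cluster/*.keyring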

ssh ceph-node
sudo mkdir /var/local/osd0

exit

ceph-deploy osd prepare ceph-node:/var/local/osd0
ceph-deploy osd activate ceph-node:/var/local/osd0
ceph-deploy admin ceph-admin ceph-node
sudo chmod +r /etc/ceph/ceph.client.admin.keyring

ceph health
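
If everything went fine, this should eventually report HEALTH_OK; anything else is worth investigating with the debug commands at the end of this post.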

rbd create foo --size 4096 -m ceph-node -k ceph.client.admin.keyring
sudo rbd map foo --name client.admin -m ceph-node -k ceph.client.admin.keyring
sudo mkfs.ext4 -m0 /dev/rbd/rbd/foo
sudo mkdir /mnt/ceph-block-device
sudo mount /dev/rbd/rbd/foo /mnt/ceph-block-device
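
When you're done playing with the block device, you can unmount and unmap it:

sudo umount /mnt/ceph-block-device
sudo rbd unmap /dev/rbd/rbd/foo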

Finally, here are some useful commands to debug your cluster:

ceph health                   # overall cluster health
ceph -w                       # watch cluster events in real time
ceph quorum_status            # monitor quorum details
ceph -m ceph-node mon_status  # status of a specific monitor
ceph osd stat                 # OSD count and state
ceph mon dump                 # dump the monitor map