Hybrid Hacker

👾 Technical notes and hybrid geekeries

Single node Ceph cluster on CentOS 7 with VirtualBox

  • Sun 17 January 2016
  • Cloud

If you work with cloud infrastructure, you know that Ceph is nowadays the standard when it comes to distributed file systems. The only good alternative, imho, is GlusterFS, but today I want to talk about installing a single node Ceph cluster on CentOS 7 using VirtualBox. This is for testing purposes only. I repeat: for testing purposes only. A single node Ceph cluster makes no sense in production, but it can be useful for getting familiar with the environment for the first time. It is also a reasonable choice if you have a small amount of resources on your local development machine and want to simulate a Ceph environment with only one node.


Before starting the Ceph deployment, create 2 VMs on VirtualBox with the following characteristics:

  • at least 2 CPUs
  • at least 2GB of RAM
  • two network cards: one attached to NAT and another as a Host-only Adapter with a network assigned (in this example I've chosen network)
  • one disk each, with at least 50GB of space
  • a clean install of CentOS 7, with one machine configured as and the other with address (in addition to the DHCP-only NAT adapter)
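If you prefer the command line, the VM setup above can also be scripted with VBoxManage. Here is a minimal sketch for one of the two machines; the VM name, disk path and host-only interface name (vboxnet0) are assumptions to adapt to your setup:

```shell
# Sketch: provision one VM matching the specs above.
# The name, disk path and host-only interface are assumptions.
VM=ceph-node
VBoxManage createvm --name "$VM" --ostype RedHat_64 --register
VBoxManage modifyvm "$VM" --cpus 2 --memory 2048
VBoxManage modifyvm "$VM" --nic1 nat --nic2 hostonly --hostonlyadapter2 vboxnet0
VBoxManage createhd --filename "$HOME/VirtualBox VMs/$VM/$VM.vdi" --size 51200
VBoxManage storagectl "$VM" --name SATA --add sata --controller IntelAhci
VBoxManage storageattach "$VM" --storagectl SATA --port 0 --device 0 --type hdd \
    --medium "$HOME/VirtualBox VMs/$VM/$VM.vdi"
```

Repeat the same commands with a different name for the second VM, then install CentOS 7 on both from an attached ISO.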


setenforce 0
sed -i 's/SELINUX.*=.*enforcing/SELINUX=disabled/g' /etc/selinux/config
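If you want to check the sed expression before touching the real /etc/selinux/config, you can run it against a scratch copy first (the /tmp path is just for illustration):

```shell
# Try the substitution on a scratch copy of the config (path is illustrative)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > /tmp/selinux-test
sed -i 's/SELINUX.*=.*enforcing/SELINUX=disabled/g' /tmp/selinux-test
grep '^SELINUX=' /tmp/selinux-test   # SELINUX=disabled
```

Note that lines without "enforcing" (like SELINUXTYPE=targeted) are left untouched.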

firewall-cmd --zone=public --add-port=6789/tcp --permanent      # Ceph monitor
firewall-cmd --zone=public --add-port=6800-7100/tcp --permanent # OSD/MDS daemons
firewall-cmd --reload

sudo useradd -d /home/ceph-admin -m ceph-admin -s /bin/bash
sudo passwd ceph-admin
echo "ceph-admin ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph-admin
sudo visudo # locate "Defaults requiretty" and change it to "Defaults:ceph-admin !requiretty"

sudo chmod 0440 /etc/sudoers.d/ceph-admin


setenforce 0
sed -i 's/SELINUX.*=.*enforcing/SELINUX=disabled/g' /etc/selinux/config

sudo tee /etc/yum.repos.d/ceph-deploy.repo > /dev/null <<EOF
[ceph-noarch]
name=Ceph noarch packages
# adjust the release name (e.g. hammer) and distro path to your needs
baseurl=https://download.ceph.com/rpm-hammer/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
EOF

sudo yum update && sudo yum install -y ceph-deploy ntp ntpdate ntp-doc openssh-server ceph-common ceph-mds

echo " admin-node" >> /etc/hosts
echo " ceph-node" >> /etc/hosts

sudo useradd -d /home/ceph-admin -m ceph-admin -s /bin/bash
sudo passwd ceph-admin
echo "ceph-admin ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph-admin
sudo chmod 0440 /etc/sudoers.d/ceph-admin

su ceph-admin

# Generate an ssh key without passphrase and copy it to the ceph node
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
ssh-copy-id ceph-admin@ceph-node

tee ~/.ssh/config > /dev/null <<EOF
Host ceph-node
    Hostname ceph-node
    User ceph-admin
EOF
chmod 600 ~/.ssh/config
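You can verify that ssh picks up the client config without actually connecting, using ssh -G, which prints the resolved options for a host. Here the same snippet is written to a scratch file so the check is side-effect free (the /tmp path is illustrative):

```shell
# Write the same config to a scratch file and check how ssh resolves it
cat > /tmp/ssh-config-test <<EOF
Host ceph-node
    Hostname ceph-node
    User ceph-admin
EOF
ssh -G -F /tmp/ssh-config-test ceph-node | grep '^user '   # user ceph-admin
```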

mkdir ~/ceph-cluster && cd ~/ceph-cluster

ceph-deploy new ceph-node

echo "osd pool default size = 1" >> ceph.conf
echo "public network =" >> ceph.conf
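With osd pool default size = 1, Ceph keeps a single replica of each object, which is what allows a one-OSD cluster to reach a healthy state. After these edits the [global] section of ceph.conf should look roughly like this (the fsid and monitor entries are generated by ceph-deploy and will differ; the public network is your host-only network):

```ini
[global]
fsid = <generated by ceph-deploy>
mon_initial_members = ceph-node
mon_host = <ceph-node host-only address>
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 1
public network = <your host-only network>
```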

ceph-deploy install ceph-admin ceph-node
ceph-deploy mon create-initial
ceph-deploy gatherkeys ceph-node

ssh ceph-node
sudo mkdir /var/local/osd0
exit # back to the admin node


ceph-deploy osd prepare ceph-node:/var/local/osd0
ceph-deploy osd activate ceph-node:/var/local/osd0
ceph-deploy admin ceph-admin ceph-node
sudo chmod +r /etc/ceph/ceph.client.admin.keyring

ceph health

rbd create foo --size 4096 -m ceph-node -k ceph.client.admin.keyring
sudo rbd map foo --name client.admin -m ceph-node -k ceph.client.admin.keyring
sudo mkfs.ext4 -m0 /dev/rbd/rbd/foo
sudo mkdir /mnt/ceph-block-device
sudo mount /dev/rbd/rbd/foo /mnt/ceph-block-device
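Once the ext4 filesystem is mounted on /mnt/ceph-block-device, a quick write/read round trip confirms the block device works end to end (the test file name is arbitrary):

```shell
# Smoke test: write a file to the mounted RBD image and read it back
echo "hello ceph" | sudo tee /mnt/ceph-block-device/test.txt
cat /mnt/ceph-block-device/test.txt
```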

Here are some useful commands for debugging your cluster.

ceph health                    # overall cluster health summary
ceph -w                        # watch cluster events in real time
ceph quorum_status             # monitor quorum state
ceph -m ceph-node mon_status   # status of the monitor on ceph-node
ceph osd stat                  # OSD count and state
ceph mon dump                  # dump the monitor map