If you have ever tried to deploy Kubernetes on on-premise nodes, you probably know that getting a fully functional k8s deployment is not an easy procedure.

Persistence is the worst part, in my opinion. It is easy to create fixed directories attached to a certain node, but that approach is inflexible and requires a lot of manual steps every time you create new deployments in your k8s namespaces.

Creating SSH keys for authentication

Execute the following command to create the new SSH keys:

ssh-keygen -t rsa

Both the public and private keys will be located at $HOME/.ssh.

The public key now has to be copied to every node where we want to deploy GlusterFS + Heketi, using the ssh-copy-id command:

ssh-copy-id <username>@<node-fqdn>

You will be prompted for the user's password, and then the SSH public key will be copied to the node, making it available for future logins.

To test it, just try to log in to your server via SSH:

ssh <username>@<node-fqdn>

If everything went well, you will be logged in immediately, without being asked for a password.

Now copy the private key (id_rsa) to /etc/heketi/heketi_key.
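
A minimal sketch, assuming the key was generated in the default location and that the Heketi package (installed later) runs as the heketi user:

mkdir -p /etc/heketi
cp $HOME/.ssh/id_rsa /etc/heketi/heketi_key
chmod 600 /etc/heketi/heketi_key
# once Heketi is installed, make sure its service user can read the key:
chown heketi:heketi /etc/heketi/heketi_key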

Preparing the disk drives

GlusterFS can work with emulated disk drives using loop devices, but this approach is not recommended due to automatic mounting problems, so that topology will not be covered in this post.

You can use GlusterFS with raw, unformatted disk drives or with LVM partitions; both will be discussed in this entry.

Raw Disk Drives

This approach is the easiest, but also the most expensive, because you will need a secondary hard disk drive.

Once you have the drives, plug them into the physical nodes and make sure they are successfully recognised by the OS.
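
For instance, you can list the block devices and confirm the new drive shows up (here /dev/sdb is just an example name):

lsblk
fdisk -l /dev/sdb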

Now it's time to erase the contents and partitions on the drive. Be careful: this will destroy your data!

wipefs -a /dev/<drive>

LVM partitions

If you already have LVM set up on your system drives, this approach is simple: you only need enough free space in the volume group to create a new logical volume.

This can be achieved by resizing and shrinking existing partitions [search how to reduce an LVM partition].
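
Before creating the new logical volume, you can check how much free space the volume group has, for example:

vgs
# or, with more detail:
vgdisplay <existing_volumegroup_to_use>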

And then create a new logical volume:

lvcreate -L <size>G -n <logical_volume_name> <existing_volumegroup_to_use>
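
For example, to create a 50 GiB logical volume named gluster in an existing volume group called centos_ivy (the names used later in this post's topology; adjust the size and names to your setup):

lvcreate -L 50G -n gluster centos_ivy

The resulting device will appear as /dev/centos_ivy/gluster (or /dev/mapper/centos_ivy-gluster).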

Now, you are ready to install Gluster!

Make sure /etc/fstab doesn't contain any entry for your new disks! (It shouldn't.)
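
A quick check, assuming the device names used in this post (/dev/sdb and the centos_ivy-gluster logical volume):

grep -E 'sdb|gluster' /etc/fstab || echo "OK: no fstab entries for the new disks"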

Installing GlusterFS

Your Linux distribution probably already ships GlusterFS packages in its official repositories, but Heketi most likely isn't included in them. In any case, it's better to use the official Gluster repositories.

So, let's add the Gluster repositories to our system!

Just follow the official docs from Gluster: https://docs.gluster.org/en/v3/Install-Guide/Install
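
Once the repository is in place, installing the server packages usually looks like this (package names can vary slightly between distributions and versions, so treat it as a sketch):

# Debian-based
apt install glusterfs-server
# RHEL-based
yum install glusterfs-server

# on every storage node, start and enable the Gluster daemon
systemctl enable --now glusterd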

Installing Heketi & Heketi-cli

Heketi should be installed on your main machine. In this post we are installing it standalone, so this installation doesn't provide any redundancy; if you want fault tolerance, Heketi should be deployed into an existing k8s cluster.

I'm going to deploy it on my Kubernetes & etcd master node.

For Debian:

apt install heketi heketi-cli

For RHEL-based distros:

yum install heketi heketi-cli

Important Heketi paths

  • /var/lib/heketi/ contains heketi.db; removing it gives you a fresh start if anything went wrong during setup.
  • /etc/heketi/heketi.json contains the Heketi server configuration: port, SSH key used for authentication, etc. (see the sketch after this list).
  • /usr/share/heketi/ is where you can find the topology.json file.
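
For reference, the parts of /etc/heketi/heketi.json that matter for this setup look roughly like this. It is a trimmed sketch, not the full file; the port and SSH user shown here are the values used later in this post, so adjust them to your environment:

{
  "port": "9005",
  "use_auth": false,
  "glusterfs": {
    "executor": "ssh",
    "sshexec": {
      "keyfile": "/etc/heketi/heketi_key",
      "user": "root",
      "port": "22",
      "fstab": "/etc/fstab"
    },
    "db": "/var/lib/heketi/heketi.db"
  }
}

Restart the Heketi service after editing this file so the changes take effect.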

Setting up the topology

We need to take care of a few things in the topology file:

  • storage must be an IP address instead of an FQDN
  • zone is the availability zone, depending on your redundancy concerns. If you're not sure, just use a single zone.
  • devices is an array containing the devices prepared earlier
{
 "clusters": [
   {
     "nodes": [
       {
         "node": {
           "hostnames": {
             "manage": [
               "ivy.domain.com"
             ],
             "storage": [
               "x.x.x.x"
             ]
           },
           "zone": 1
         },
         "devices": [
           "/dev/mapper/centos_ivy-gluster"
         ]
       },
       {
         "node": {
           "hostnames": {
             "manage": [
               "bizarro.domain.com"
             ],
             "storage": [
               "x.x.x.x"
             ]
           },
           "zone": 1
         },
         "devices": [
           "/dev/sdb"
         ]
       }
      ]
    }
  ]
}

Loading the topology into Heketi

At this point, GlusterFS must already be installed on the nodes, and they should be able to reach each other.
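
A quick sanity check from the Heketi machine, using the SSH key configured earlier (the hostnames and the root user are the ones used in this post's examples; adjust them to yours):

ssh -i /etc/heketi/heketi_key root@ivy.domain.com gluster --version
ssh -i /etc/heketi/heketi_key root@bizarro.domain.com gluster --version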

First, make sure the Heketi server is running:

systemctl start heketi && systemctl enable heketi
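
To verify the server is answering on its configured port (9005 here, as set in heketi.json), you can hit its /hello endpoint:

curl http://saitama.domain.com:9005/hello
# it should reply with a short greeting from Heketi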

Now, load the topology:

heketi-cli --server http://saitama.domain.com:9005 --user root topology load --json="/usr/share/heketi/topology.json"

NOTE: Port 9005 is not the default one (8080); it is configured in /etc/heketi/heketi.json.

heketi-cli --server http://saitama.domain.com:9005 --user root topology load --json="/usr/share/heketi/topology.json"
Creating cluster ... ID: 47fead3f7ca519f4b0940a7c8762f991
        Allowing file volumes on cluster.
        Allowing block volumes on cluster.
        Creating node ivy.domain.com ... ID: 80ea81822725c9ecd80b935c2af470eb
                Adding device /dev/mapper/centos_ivy-gluster ... Unable to add device: WARNING: xfs signature detected on /dev/mapper/centos_ivy-gluster at offset 0. Wipe it? [y/n]: [n]
  Aborted wiping of xfs.
  1 existing signature left on the device.
        Creating node deadpool.domain.com ... ID: ef5b40f43096315232213890a024271b
                Adding device /dev/sdb ... OK
        Creating node bizarro.domain.com ... ID: 0b5ac5e86ec4fe2ce21dd1af193a05b6
                Adding device /dev/sdb ... OK

In the previous log, the device on node ivy already had a filesystem, so the cluster wasn't formed properly. We need to wipe the filesystem and relaunch the command:

wipefs -a /dev/centos_ivy/gluster

heketi-cli --server http://saitama.domain.com:9005 --user root topology load --json="/usr/share/heketi/topology.json"

Now, we have a fully functional Gluster + Heketi cluster!

heketi-cli -s http://localhost:9005 cluster info 47fead3f7ca519f4b0940a7c8762f991
Cluster id: 47fead3f7ca519f4b0940a7c8762f991
Nodes:
0b5ac5e86ec4fe2ce21dd1af193a05b6
80ea81822725c9ecd80b935c2af470eb
ef5b40f43096315232213890a024271b
Volumes:

Block: true

File: true

Registering the StorageClass and Persistent Volume Claim in k8s

You will need to know the cluster id in order to configure it in Kubernetes:

heketi-cli -s <server_url_port> cluster list

Now, register the StorageClass (despite its name, gluster_pvc.yml defines a StorageClass, not a PVC):

kubectl apply -f "gluster_pvc.yml"

With contents:

apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: gluster
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://saitama.domain.com:9005"
  restuser: "rest_user"
  restuserkey: "rest_user_key"
  volumetype: "replicate:3"
  clusterid: "47fead3f7ca519f4b0940a7c8762f991"

You can find more info here: https://kubernetes.io/docs/concepts/storage/storage-classes/#glusterfs
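
To actually request storage from this class, you can then create a PersistentVolumeClaim that references it. A minimal sketch (the claim name, access mode and size are just examples):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-claim
spec:
  storageClassName: gluster
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi

Apply it with kubectl apply -f, and the glusterfs provisioner will create the backing Gluster volume through Heketi.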