Spectrum Scale with OpenShift

IBM Spectrum Scale is one of many scale-out filesystems, and it is a great complement to Kubernetes and OpenShift.

In this technical blog we show how you can install IBM Spectrum Scale with your OpenShift cluster.

By using IBM Spectrum Scale or IBM ESS you can easily provide NVMe/SSD or HDD storage for your Kubernetes cluster.

If you want to use Spectrum Scale or ESS as Persistent Volume storage for Red Hat OpenShift 4.x, you must have worker nodes based on Red Hat Enterprise Linux 7.6 or higher, not the CoreOS workers that are normally installed by default with OpenShift 4.x.

To install RHEL worker nodes into an existing OpenShift 4.x cluster, please follow the blog post we have written on that topic.

 

Install Spectrum Scale on the Worker Node

This is only a recommendation, but try to always keep your systems up to date before you even start installing Spectrum Scale on them.

$ sudo yum update -y
$ sudo reboot

If you can’t update and reboot right now, it’s still OK to continue with this guide as long as the system isn’t too old. Continue with the following step to start installing the Spectrum Scale client on your worker node.

$ sudo yum install unzip ksh perl libaio.x86_64 net-tools m4 gcc-c++ psmisc.x86_64 "kernel-devel-uname-r == $(uname -r)" -y

If the IBM Spectrum Scale client package is delivered in tar.gz format, you first need to extract it.

$ tar zxvf Scale_dme_install-5.0.5.0_x86_64.tar.gz

From the self-extracting installer you must extract all the RPMs.

$ sudo ./Spectrum_Scale_Data_Management-5.0.5.0-x86_64-Linux-install --text-only --silent

Go to the Spectrum Scale RPM directory and install the minimum set of packages needed to get the Spectrum Scale client working.

$ cd /usr/lpp/mmfs/5.0.5.0/gpfs_rpms
$ sudo yum install gpfs.base*.rpm gpfs.gpl*rpm gpfs.license.adv*.rpm gpfs.gskit*rpm gpfs.msg*rpm gpfs.adv*rpm gpfs.crypto*rpm -y
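
As a quick sanity check, you can list the GPFS packages that were just installed before moving on:

$ rpm -qa | grep ^gpfs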

Spectrum Scale now ships a really nice little tool, mmbuildgpl, that does all the kernel-module compiling for you, and if anything is missing it will abort automatically.

$ sudo /usr/lpp/mmfs/bin/mmbuildgpl
--------------------------------------------------------
mmbuildgpl: Building GPL (5.0.5.0) module begins at Mon Aug 24 11:43:34 CEST 2020.
--------------------------------------------------------
Verifying Kernel Header...
kernel version = 31000999 (310001127019001, 3.10.0-1127.19.1.el7.x86_64, 3.10.0-1127.19.1)
module include dir = /lib/modules/3.10.0-1127.19.1.el7.x86_64/build/include
module build dir = /lib/modules/3.10.0-1127.19.1.el7.x86_64/build
kernel source dir = /usr/src/linux-3.10.0-1127.19.1.el7.x86_64/include
Found valid kernel header file under /usr/src/kernels/3.10.0-1127.19.1.el7.x86_64/include
Verifying Compiler...
make is present at /bin/make
cpp is present at /bin/cpp
gcc is present at /bin/gcc
g++ is present at /bin/g++
ld is present at /bin/ld
Verifying Additional System Headers...
Verifying kernel-headers is installed ...
Command: /bin/rpm -q kernel-headers
The required package kernel-headers is installed
make World ...
make InstallImages ...
--------------------------------------------------------
mmbuildgpl: Building GPL module completed successfully at Mon Aug 24 11:44:02 CEST 2020.
--------------------------------------------------------
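
On my systems the freshly built kernel modules (mmfs26, mmfslinux, tracedev) end up under /lib/modules/$(uname -r)/extra; a quick ls confirms that the build actually installed them:

$ ls /lib/modules/$(uname -r)/extra/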

Now add the Spectrum Scale bin directory to your PATH; this will make it easier to run the Spectrum Scale commands in the rest of this guide.

PATH=$PATH:$HOME/.local/bin:$HOME/bin:/usr/lpp/mmfs/bin
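
That only affects your current shell. To make it stick across logins (assuming bash is your login shell), append it to your profile instead:

$ echo 'export PATH=$PATH:/usr/lpp/mmfs/bin' >> ~/.bash_profile
$ source ~/.bash_profile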

 

Add worker node to the Spectrum Scale cluster

In my case I want to add the nodes to an existing Spectrum Scale cluster, so I run the following commands from one of the Spectrum Scale server nodes to make the worker nodes part of the cluster.

$ sshpass -p '<password>' ssh-copy-id root@worker.dns.domain.com
$ mmaddnode worker.dns.domain.com
$ mmchlicense client worker.dns.domain.com --accept
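
After the node has been added and licensed, GPFS typically also needs to be started on it before it can mount anything; mmstartup and mmgetstate, run from the same server node, take care of that:

$ mmstartup -N worker.dns.domain.com
$ mmgetstate -N worker.dns.domain.com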

 

Label your Spectrum Scale worker nodes

$ kubectl label node <worker node> scale=true --overwrite=true
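
You can verify the label by listing only the nodes that carry it:

$ kubectl get nodes -l scale=true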

 

Install the Spectrum Scale CSI Operator and Driver

Log in to the Spectrum Scale node that has the WebUI installed and run the following command.

$ /usr/lpp/mmfs/gui/cli/mkuser <csiDriverUser> -p <csiDriverUserPassword> -g CsiAdmin
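
If you want to double-check that the user ended up in the CsiAdmin group, the GUI CLI also ships a matching list command:

$ /usr/lpp/mmfs/gui/cli/lsuser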

Log in to the machine that has oc and kubectl installed; it could be your local PC/Mac.
Now let’s export all the variables we need, so we eliminate any misspellings and make our lives easier.

$ export USERNAME=csiDriverUser               ### Change this line
$ export PASSWORD=csiDriverUserPassword       ### Change this line

$ export SCALE_GUI_DNS="scalegui.cristie.se"  ### Change this line
$ export HOST_PATH=/gpfs/k8s                  ### Change this line
$ export PRIMARYFS=k8s
$ export USERNAME_B64=$(echo -n $USERNAME | base64)   ### -n avoids base64-encoding a trailing newline
$ export PASSWORD_B64=$(echo -n $PASSWORD | base64)
$ export CSIOPERATOR="ibm-spectrum-scale-csi-driver"

Let's test our connection and see if it works.


$ curl --insecure -u $USERNAME:$PASSWORD -X GET https://$SCALE_GUI_DNS:443/scalemgmt/v2/cluster

{
  "cluster" : {
    "clusterSummary" : {
      "clusterId" : 17258972170939727157,
      "clusterName" : "node10.node10",
      "primaryServer" : "node10",
      "rcpPath" : "/usr/bin/scp",
      "rcpSudoWrapper" : false,
      "repositoryType" : "CCR",
      "rshPath" : "/usr/bin/ssh",
      "rshSudoWrapper" : false,
      "uidDomain" : "node10.node10"
    },
    "capacityLicensing" : {
      "liableCapacity" : 96636764160,
      "liableNsdCount" : 2,
      "liableNsds" : [ {
        "nsdName" : "nsd1",
        "liableCapacity" : 53687091200
      }, {
        "nsdName" : "nsd2",
        "liableCapacity" : 42949672960
      } ]
    }
  },
  "status" : {
    "code" : 200,
    "message" : "The request finished successfully."
  }
}

If you got an error instead, verify the URL, username, and password, and check that the Spectrum Scale WebUI is up and running.

Let’s grep out just the cluster ID, which we will need later when we create the YAML files.

$ curl --insecure -u $USERNAME:$PASSWORD -X GET https://$SCALE_GUI_DNS:443/scalemgmt/v2/cluster | grep clusterId

"clusterId" : 17258972170939727157,

Copy the Cluster ID and export that as a variable.

$ export CLUSTERID="17258972170939727157"
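
If you prefer to skip the manual copy-paste, the ID can also be extracted directly from the JSON response (a convenience one-liner using GNU grep; check the result with echo before moving on):

$ export CLUSTERID=$(curl -sk -u $USERNAME:$PASSWORD https://$SCALE_GUI_DNS:443/scalemgmt/v2/cluster | grep -o '"clusterId" : [0-9]*' | grep -o '[0-9]*$')
$ echo $CLUSTERID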

Now let's create a project for our IBM Spectrum Scale CSI Operator.

$ oc new-project $CSIOPERATOR


Create your secret file with the username and password used to communicate with the Spectrum Scale REST API.

cat << EOF > ./csisecret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: csisecret # This name is referenced in your CSIScaleOperator definition
  namespace: ${CSIOPERATOR}
  labels:
    app.kubernetes.io/name: ibm-spectrum-scale-csi-operator
data:
  password: ${PASSWORD_B64}
  username: ${USERNAME_B64}
type: Opaque
EOF

Apply your secret yaml file.

$ oc apply -f csisecret.yaml
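
It never hurts to verify that the secret landed in the right project before deploying the operator:

$ oc get secret csisecret -n $CSIOPERATOR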

Log in to the OpenShift WebUI and deploy the IBM Spectrum Scale CSI Operator, which you can find under Administrator -> Operators -> OperatorHub.

Because you have already created the user in Scale and applied the secret, you can simply deploy the operator.


Now let's deploy the Spectrum Scale driver so we can create our StorageClass and PersistentVolumeClaim.

cat << EOF > ./ibm-spectrum-scale-csi-driver.yaml
apiVersion: csi.ibm.com/v1
kind: CSIScaleOperator
metadata:
  name: ibm-spectrum-scale-csi
  labels:
    release: ibm-spectrum-scale-csi-operator
    app.kubernetes.io/name: ibm-spectrum-scale-csi-operator
    app.kubernetes.io/instance: ibm-spectrum-scale-csi-operator
    app.kubernetes.io/managed-by: ibm-spectrum-scale-csi-operator
spec:
  provisionerNodeSelector:
  - key: scale
    value: 'true'
  clusters:
  - secrets: csisecret
    restApi:
    - guiHost: ${SCALE_GUI_DNS}
    secureSslMode: false
    primary:
      primaryFs: ${PRIMARYFS}
    id: '${CLUSTERID}'
  scaleHostpath: ${HOST_PATH}
  pluginNodeSelector:
  - key: scale
    value: 'true'
  attacherNodeSelector:
  - key: scale
    value: 'true'
EOF

Now let's apply everything and let the Spectrum Scale CSI Operator do all the work for us. 

$ oc apply -f ibm-spectrum-scale-csi-driver.yaml
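
The operator will now roll out the attacher, provisioner, and plugin pods for you; keep an eye on the namespace until everything reports Running (exact pod names vary between CSI driver versions):

$ oc get pods -n $CSIOPERATOR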

Test our Spectrum Scale CSI Driver

Now let's test our CSI driver by creating a StorageClass and claiming a volume to use.

cat << EOF > ./storageclass-csi.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: scale-test
parameters:
  volBackendFs: ${PRIMARYFS}
  volDirBasePath: pv/pvc               ### Change this to your fileset name in Scale
provisioner: spectrumscale.csi.ibm.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
EOF

cat << EOF > ./pvc-test.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: scale-test
EOF

Now we should be able to create our StorageClass and PersistentVolumeClaim.

$ oc apply -f storageclass-csi.yaml
$ oc apply -f pvc-test.yaml

We can easily verify that the volume has been created for us.

$ oc get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-claim   Bound    pvc-aaac88ab-42be-479f-a5d6-b2714fb8f1a2   1Gi        RWO            scale-test     1m
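
To go one step further and prove that a pod can actually write to the volume, here is a minimal test pod sketch (the pod name, image, and mount path are my own examples, not part of the setup above; any image with a shell works):

cat << EOF > ./pod-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: scale-test-pod
spec:
  nodeSelector:
    scale: "true"        ### Schedule onto a node that runs the Scale client
  containers:
  - name: test
    image: registry.access.redhat.com/ubi8/ubi-minimal
    command: ["sh", "-c", "echo hello from scale > /data/hello.txt && sleep 3600"]
    volumeMounts:
    - name: scale-vol
      mountPath: /data
  volumes:
  - name: scale-vol
    persistentVolumeClaim:
      claimName: test-claim
EOF

$ oc apply -f pod-test.yaml
$ oc exec scale-test-pod -- cat /data/hello.txt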

I hope you found this blog post useful.

#AtYourService
Christian Petersson
