How To Set-up Conga Cluster (Red Hat Cluster) on RHEL6 Using 3 Virtual Machines


Conga is a server (luci) & client (ricci) based cluster management solution for Linux. Luci is the cluster manager &, in simple terms, offers a GUI to create & manage clusters. This tutorial uses 3 RHEL VMs (of course, you can use any similar Linux distribution). One VM will offer shared storage through iSCSI & will also act as the Cluster Manager (luci will be installed here). The other 2 VMs will act as cluster nodes. We can also set up a Red Hat Cluster using only 2 servers, where 1 server acts as iSCSI target, Cluster Manager & cluster node. Click here to learn how to set up iSCSI.

Below are the steps to set up our Red Hat Cluster on RHEL6.

Lab Description:

Cluster Manager / Shared storage  – server.shashank.com 192.168.0.1

Cluster Node1  – node1.shashank.com 192.168.0.2

Cluster Node2  – node2.shashank.com 192.168.0.3

1. Install required packages. Install the luci package on VM1. Let's call this VM server (which happens to be the hostname of my server here). Also set the luci service to start at boot time. Install the ricci packages on the 2 cluster nodes, node1 & node2 (these are the hostnames in my lab; choose any name you like!). Also set the ricci service to start at boot time.

  • yum install luci* -y
  • chkconfig luci on
  • yum install ricci* -y
  • chkconfig ricci on

2. Start & configure required services. Start the luci service on server. Once the service is started, point your browser to localhost:8084 & log in using your root password. You will see a slightly overwhelming Luci console 😉 Now, assign a password to the ricci user on both nodes. We are not yet ready to create the cluster here. Please read on.
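For reference, the commands for this step look roughly like this (a sketch assuming the default RHEL6 service names):

service luci start       # on server; luci then listens on port 8084
passwd ricci             # on node1 & node2; sets the password for the ricci user
service ricci start      # on node1 & node2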

3. Configure Shared Storage. Create a partition on server & export it as a LUN to both nodes. This is done using iSCSI & has been discussed here. Issue fdisk -l on both nodes to confirm you have an additional disk. If you can't see it, reboot the nodes.
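If the new disk does not show up on a node, this sketch (assuming the iSCSI target on server, 192.168.0.1, is already configured as described in the linked post) can be used to discover & log in to it:

iscsiadm -m discovery -t sendtargets -p 192.168.0.1    # discover targets exported by server
iscsiadm -m node -l                                    # log in to the discovered target(s)
fdisk -l                                               # the shared disk should now be listed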

4. Create a Quorum Disk on any node. I created it on node1. What is a quorum disk, you may ask? 😉 Well, it's a storage device that holds the cluster configuration database. This database determines which node must be active at a given time, which is decided by votes cast by each node. Below is the command to create a quorum disk.

mkqdisk -c /device -l quorumdisk

/device is your LUN, i.e. the iSCSI disk shared from server. Make sure you first create a small partition on the LUN & then create the quorum disk on that partition. A 100 MB partition is enough; it does not need to be big. -l stands for label; it can be anything meaningful, just be sure to give it a nice name 😉 Now check whether you can see this quorum disk on node2 as well. If not, reboot.
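As an illustration, assuming the shared LUN appears as /dev/sdc & the small quorum partition becomes /dev/sdc1 (device names are specific to my lab & will differ in yours):

fdisk /dev/sdc                       # create a ~100 MB partition, e.g. /dev/sdc1
partprobe /dev/sdc                   # re-read the partition table
mkqdisk -c /dev/sdc1 -l quorumdisk   # create the quorum disk with label quorumdisk
mkqdisk -L                           # list quorum disks; run on node2 too, to verify it is visible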

5. Create a GFS partition on node1. This is what we will use to demonstrate clustering. For this, first create another partition on the LUN & format it with the GFS2 filesystem. Below is how it is done. You may need to install the GFS2 packages first, if they are not already available.

mkfs.gfs2 -p lock_dlm -t shashank:GFS -j 2 /device

-p specifies the locking protocol, which is lock_dlm in our case. -t takes the cluster name & a filesystem name separated by a colon; the cluster name (shashank here) must match the name you will give your cluster in Luci (I will come to this in a short while). -j 2 creates two journals, one per cluster node. /device is the new LUN partition, which is /dev/sdc3 in my lab. Make changes accordingly.
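Putting it together for my lab, where the GFS partition is /dev/sdc3 & the cluster name is shashank (the package name gfs2-utils is an assumption; adjust it to whatever provides mkfs.gfs2 in your repositories):

yum install gfs2-utils -y                              # only if mkfs.gfs2 is missing
mkfs.gfs2 -p lock_dlm -t shashank:GFS -j 2 /dev/sdc3   # -j 2 = one journal per cluster node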

Now check whether you can see the GFS partition on node2. If not, reboot. It's best to create the quorum disk & the GFS partition first & then reboot both nodes to ensure proper configuration.

GFS partition

6. Create a mount-point on both nodes. Choose any name, but make sure it's the same on both of them. I created /gfs on both.
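On both nodes this is simply:

mkdir /gfs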

7. Now go back to the Luci console. Click Cluster & then Create. Give the cluster the name that you chose while creating the GFS partition (step 5); it's shashank in my case. Select “Use the Same Password for All Nodes“. In Node Name, enter the hostname of node1 (which is node1 in my case). Type in the ricci password that we set in step 2. Click Add Another Node to add node2. Select “Enable Shared Storage Support” since we are using GFS. Now click Create Cluster to start creating your cluster. It will take some time & then you will see something similar to the 2nd screenshot 😉

Potential pitfalls – Check whether the cman & rgmanager services are running on both nodes & are configured to start at boot. If not, start them & enable them at boot time. Also stop the NetworkManager service & disable it at startup.
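A quick way to check & fix all of this on both nodes:

service cman status ; service rgmanager status     # are they running?
chkconfig cman on ; chkconfig rgmanager on          # make them start at boot
service NetworkManager stop                         # stop NetworkManager
chkconfig NetworkManager off                        # & keep it off at boot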

Creating Cluster

Cluster Nodes Added

8. Configure Failover Domain. Now click Failover Domains & then click Add a New Failover Domain. Give it a name & choose Prioritized. Then enable both nodes under Members & set the priority to 1 for both.

9. Configure Quorum Disk. For this, click Configure & then QDisk. Select Use a Quorum Disk, then select By Device Label & enter quorumdisk. This is the label of the quorum disk created in step 4; choose yours accordingly. Under Heuristics -> Path to Program, type ping -c2 -t1 node1. Set Interval to 2 & Score to 1. Click Add Another Heuristic & do the same for node2, i.e. ping -c2 -t1 node2. This is a kind of test that each node runs to earn its vote. Click Apply.
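Once applied, you can confirm from either node that the quorum disk has been picked up (a rough check; exact output differs between releases):

clustat              # the quorum disk should appear in the member status list
cman_tool status     # shows expected votes & quorum information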

10. Create Resources. For this, click Resources & then click Add. From the drop-down list, select GFS. Give it some nice name 😉 Provide the Mount Point as /gfs (this is what I created on both nodes; choose yours accordingly). For Device, FS Label, or UUID, enter /dev/sdc3 (again, change this accordingly, since /dev/sdc3 is specific to my set-up). Select the Force Unmount option & click Submit.

11. Create a Service Group. For this, click the Service Groups tab & then click Add. Give it some really nice name 😉 (once again!) & ensure the Automatically Start This Service option is selected. Then choose the Failover Domain that was created above from the drop-down list. Set the Recovery Policy to Relocate. Now click Add Resource, select the resource that you created in step 10 from the drop-down list & click Submit.

12. Refresh the browser tab & you will see our service running in the Luci console.

Service Running

13. Now it's time to test our cluster. As we know, we have 2 cluster nodes, each with a /gfs mount-point. If one node fails, the data in /gfs will be available from the other node (which is the very point of clustering here). At the moment our service is running on node1. Type the clustat command & check the output. You will find the details there. Note that /gfs is mounted here but not on node2.
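A quick way to confirm this from the command line on node1 (output will vary with your service & node names):

clustat              # shows cluster members & which node owns the service
mount | grep gfs     # /gfs should be listed on node1, but not on node2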

Cluster Running on Node1

Now, let's relocate this service to node2. Type:

clusvcadm -r GFS -m node2

Here -r means relocate, GFS is the service name & -m specifies the node to relocate the service to.

Cluster Running on Node2

Let's go to node2 & check the status. Type the clustat command & note the output. Check the available mount-points as well.
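For example, on node2 (assuming the service group is named GFS, as used in the relocate command above):

clustat              # the service should now show node2 as its owner
mount | grep gfs     # /gfs should now be mounted on node2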

Cluster Running on Node2 After Relocation

Cluster Running on Node2 & /gfs Mounted

So, that's it 🙂 We can see that the /gfs mount-point is clustered between the 2 nodes & remains available even if one of the nodes goes down.

