Two Red Hat systems are required to build a two-node Conga cluster providing High Availability for an Apache web server.
Both servers run the luci service (installable using yum) and share a GFS file system on storage (SAN Fibre Channel or iSCSI) accessible from both systems.
Installation Process
create a cluster.repo file in /etc/yum.repos.d
#vi /etc/yum.repos.d/cluster.repo
[Server]
name=twonodeCluster
baseurl=file:///misc/cd/Server
enabled=1
gpgcheck=0
save and exit
start the autofs service (it automounts the installation media under /misc/cd, matching the baseurl above)
#service autofs restart
insert the RHEL media in your CD/DVD reader & update the yum database
#yum update
install packages for cluster
#yum groupinstall -y "Cluster Storage" "Clustering"
install the following packages if you use an iSCSI initiator
#yum install -y iscsi-initiator-utils isns-utils
configure the iSCSI services to start at boot & start them now :
#chkconfig iscsi on
#chkconfig iscsid on
#service iscsi start
#service iscsid start
Three systems are used for this setup
The two cluster-node systems, each with two network cards (one for the Production Environment and another for the High Availability check).
rhel-cluster-node1
192.168.100.101 ( Production Environment)
10.10.10.1 ( HA Check)
rhel-cluster-node2
192.168.100.102 ( Production Environment)
10.10.10.2 ( HA Check)
rhel-cluster-SAN
192.168.100.103
create a cluster with the virtual IP 192.168.100.100, which shares the services from the 192.168.100.101 and 192.168.100.102 nodes, and use a GFS filesystem reachable over iSCSI on 192.168.100.103 .
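For reference, the dedicated HA-check interface on each node is just a statically addressed NIC; a minimal sketch for rhel-cluster-node1, assuming the second card appears as eth1 (adjust the device name to match your hardware) :
#vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=static
IPADDR=10.10.10.1
NETMASK=255.255.255.0
ONBOOT=yes
save and exit, then restart networking with "service network restart"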
with the iSCSI target configured on the SAN, run the following commands on both nodes to discover the target & log in to the shared LUN :
#iscsiadm -m discovery -t st -p 192.168.100.103
#iscsiadm -m node -L all
#vi /etc/iscsi/send_targets
192.168.100.103
save and exit
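after a successful login the shared LUN should show up as a local block device; one way to confirm (the device name may differ on your system) :
#fdisk -l
#cat /proc/partitions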
On both cluster nodes, add the following lines to /etc/hosts :
10.10.10.1 rhel-cluster-node1.mgmt.local rhel-cluster-node1
10.10.10.2 rhel-cluster-node2.mgmt.local rhel-cluster-node2
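to confirm the HA-check network works, ping the peer node by name (shown here from rhel-cluster-node1) :
#ping -c 3 rhel-cluster-node2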
check that the iSCSI mapped device is /dev/sdb (otherwise adjust the following commands), then create a new Physical Volume, a new Volume Group and a new Logical Volume to use as shared storage for the cluster nodes :
#pvcreate /dev/sdb
#vgcreate vg1 /dev/sdb
#lvcreate -l 10239 -n lv0 vg1
this creates a new volume group "vg1" and a new logical volume "lv0". The "-l 10239" parameter is the number of extents available on my iSCSI shared storage, in this case 40 GB.
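you can verify the new volume with the standard LVM reporting commands :
#pvs
#vgs
#lvs
#lvdisplay /dev/vg1/lv0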
create the clustered GFS file system on your device using the command below :
#gfs_mkfs -p lock_dlm -t rhel-cluster:storage1 -j 8 /dev/vg1/lv0
the GFS file system is created with locking protocol "lock_dlm" for a cluster called "rhel-cluster" and with name "storage1"; the "-j 8" option creates 8 journals, so up to 8 hosts can mount this GFS, and the device used is /dev/vg1/lv0.
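note that a lock_dlm GFS can only be mounted once the cluster manager (cman, started below) is running, and if the volume group must be active on both nodes at the same time, clustered LVM locking is normally enabled as well; a hedged sketch, assuming the lvm2-cluster package is installed (clvmd itself needs cman running, so start it after cman is up) :
#lvmconf --enable-cluster
#chkconfig clvmd on
#service clvmd start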
To administer Red Hat Clusters with Conga, run luci and ricci as follows :
#service luci start
#service ricci start
enable ricci and luci on both systems at start up
#chkconfig luci on
#chkconfig ricci on
initialize the luci server using the luci_admin init command on both servers
#service luci stop
#luci_admin init
"luci_admin init" will create the 'admin' user and set its password
#service luci restart
#chkconfig cman on
#service cman start
#chkconfig rgmanager on
#service rgmanager start
(cman must be running before rgmanager starts.)
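once cman and rgmanager are up on both nodes, check the cluster membership & service manager state :
#cman_tool status
#cman_tool nodes
#clustat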
create the /data mount point, then edit /etc/fstab and add
#mkdir /data
#vi /etc/fstab
/dev/vg1/lv0 /data gfs defaults,acl 0 0
check by mounting everything in fstab :
#mount -a
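confirm the GFS filesystem is mounted :
#mount | grep gfs
#df -h /data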
configure apache to serve one or more virtual hosts from folders on the shared storage.
for example, on both nodes, add the following to the end of /etc/httpd/conf/httpd.conf
#vi /etc/httpd/conf/httpd.conf
<VirtualHost *:80>
ServerAdmin webmaster@mgmt.local
DocumentRoot /data/websites/default
ServerName rhel-cluster.mgmt.local
ErrorLog logs/rhel-cluster_mgmt_local-error_log
CustomLog logs/rhel-cluster_mgmt_local-access_log common
</VirtualHost>
save and exit
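before restarting, validate the configuration syntax :
#apachectl -t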
create two directories under /data :
#mkdir /data/websites
#mkdir /data/websites/default
create an index file
#vi /data/websites/default/index.html
Cluster Nodes Conf...
set apache to start at boot time & start it :
#chkconfig httpd on
#service httpd start
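a quick local test, assuming curl is installed (the Host header makes the request match the virtual host) :
#curl -H "Host: rhel-cluster.mgmt.local" http://localhost/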
check that you can access luci in a browser :
https://rhel-cluster-node1:8084
1. Select the cluster tab.
2. Click Create a New Cluster.
3. At the Cluster Name text box, enter the cluster name "rhel-cluster".
Add the node name and password for each cluster node.
4. Click Submit to download, install, configure & start the cluster software on each node
Add a resource :
choose IP Address and use 192.168.100.100
Create a service named "cluster", add the "IP Address" resource you created before,
check "Automatically start this service"
check "Run exclusive"
choose "Relocate" as the "Recovery policy"
Save the service,
enable it, and start it on one cluster node.
check cluster configuration file
#cat /etc/cluster/cluster.conf
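the generated file should look roughly like the following (an illustrative sketch; the exact attributes Conga writes will differ) :
<?xml version="1.0"?>
<cluster name="rhel-cluster" config_version="1">
  <clusternodes>
    <clusternode name="rhel-cluster-node1" nodeid="1" votes="1"/>
    <clusternode name="rhel-cluster-node2" nodeid="2" votes="1"/>
  </clusternodes>
  <cman expected_votes="1" two_node="1"/>
  <rm>
    <service autostart="1" exclusive="1" name="cluster" recovery="relocate">
      <ip address="192.168.100.100" monitor_link="1"/>
    </service>
  </rm>
</cluster>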
check shared IP Address
#/sbin/ip addr list
Check the Cluster
shutdown or unplug one host from the network & you should see that the website on 192.168.100.100 is still reachable.
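you can also relocate the service in a controlled way instead of pulling the plug :
#clusvcadm -r cluster -m rhel-cluster-node2
#clustat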