CentOS / Red Hat 7 (Linux) PCS CLUSTER
Environment
- OS:
Version : CentOS 7.3
# Private (heartbeat)
Svr1 : 192.168.56.120 (node1)
Svr2 : 192.168.56.121 (node2)
Disk : Shared storage : /dev/sdb1
Vip : 192.168.56.200
- LOG location
/var/log/cluster/corosync.log
/var/log/pcsd/pcsd.log
/var/log/pacemaker.log
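When troubleshooting, it can help to follow all three of the logs listed above at once:

```shell
# Follow the corosync, pcsd, and pacemaker logs together (Ctrl-C to stop)
tail -F /var/log/cluster/corosync.log /var/log/pcsd/pcsd.log /var/log/pacemaker.log
```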
Step #1 : Install the High Availability cluster packages on both nodes
[root@maria yum.repos.d]# yum install -y pacemaker pcs fence-agents-all
Step #2 : Stop and disable the firewall service (node1, node2)
[root@svr1 ~]# systemctl stop firewalld
[root@svr1 ~]# systemctl disable firewalld
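Disabling the firewall is fine for a lab setup; on a production cluster, a safer alternative (a sketch, assuming firewalld is in use) is to open only the cluster ports, which firewalld bundles as a predefined "high-availability" service:

```shell
# Open only the cluster ports (TCP 2224 pcsd, 3121 pacemaker_remote,
# 21064 dlm; UDP 5404-5405 corosync) instead of disabling the firewall
firewall-cmd --permanent --add-service=high-availability
firewall-cmd --reload
```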
Step #3 : Configure the High Availability cluster (node1, node2)
[root@svr2 ~]# echo "redhat" | passwd --stdin hacluster
Changing password for user hacluster.
passwd: all authentication tokens updated successfully.
[root@svr2 ~]# systemctl start pcsd
[root@svr2 ~]# systemctl enable pcsd
Created symlink from /etc/systemd/system/multi-user.target.wants/pcsd.service to /usr/lib/systemd/system/pcsd.service.
[root@svr1 ~]# pcs cluster auth svr1 svr2 -u hacluster
Password:
svr1: Authorized
svr2: Authorized
[root@svr1 ~]#
[root@svr1 ~]# pcs cluster setup --name mycluster svr1 svr2
Destroying cluster on nodes: svr1, svr2...
svr2: Stopping Cluster (pacemaker)...
svr1: Stopping Cluster (pacemaker)...
svr1: Successfully destroyed cluster
svr2: Successfully destroyed cluster
Sending 'pacemaker_remote authkey' to 'svr1', 'svr2'
svr2: successful distribution of the file 'pacemaker_remote authkey'
svr1: successful distribution of the file 'pacemaker_remote authkey'
Sending cluster config files to the nodes...
svr1: Succeeded
svr2: Succeeded
Synchronizing pcsd certificates on nodes svr1, svr2...
svr1: Success
svr2: Success
Restarting pcsd on the nodes in order to reload the certificates...
svr1: Success
svr2: Success
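Note that `pcs cluster setup` writes the configuration but does not start the stack; before `pcs status` can show both nodes online, the cluster must be started (and, optionally, enabled at boot):

```shell
# Start corosync and pacemaker on every node, and enable them at boot
pcs cluster start --all
pcs cluster enable --all
```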
[root@svr1 ~]# pcs status
Cluster name: mycluster
WARNING: no stonith devices and stonith-enabled is not false
Stack: corosync
Current DC: svr1 (version 1.1.16-12.el7_4.2-94ff4df) - partition WITHOUT quorum
Last updated: Sun Oct 22 12:03:00 2017
Last change: Sun Oct 22 12:02:23 2017 by hacluster via crmd on svr1
2 nodes configured
0 resources configured
Online: [ svr1, svr2 ]
No resources
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
[root@svr1 ~]#
Step #4 : Create resources for the High Availability service
- Virtual IP resource
[root@svr1 ~]# pcs resource create VirtIP ocf:heartbeat:IPaddr2 ip=192.168.56.200 cidr_netmask=24 op monitor interval=30s
[root@svr1 ~]# pcs property set stonith-enabled=false
[root@svr1 ~]# pcs property set no-quorum-policy=ignore
[root@svr1 ~]# pcs property set default-resource-stickiness="INFINITY"
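To confirm the VIP is up and rehearse a failover, a sketch using the addresses and hostnames from this setup (the standby subcommand syntax here is from the pcs 0.9 series shipped with CentOS 7):

```shell
# Check where VirtIP is running, then force it to the other node
pcs status resources
ip addr show | grep 192.168.56.200    # run on the active node
pcs cluster standby svr1              # VirtIP should move to svr2
pcs cluster unstandby svr1
```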
- Shared disk resource
[root@svr1 ~]# pcs resource create fs_dmz_kobis Filesystem device="/dev/sdb1" directory="/var/www/html" fstype="xfs" --group sharedDisk_group
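As created above, VirtIP sits outside sharedDisk_group, so nothing forces the VIP and the filesystem onto the same node. One way to tie them together (a sketch) is to put the VIP into the same group; members of a pcs group are colocated and started in order:

```shell
# Append VirtIP to the group: the filesystem mounts first,
# then the VIP is brought up on the same node
pcs resource group add sharedDisk_group VirtIP
```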
ID : hacluster
Password : redhat
Commands :
- Configuration removal :
root # pcs cluster stop --all
root # pcs cluster destroy --all