How to make MaxScale High Available with Corosync/Pacemaker

Posted 2017-12-29 13:04:38
MaxScale, an open-source database-centric router for MySQL and MariaDB, makes High Availability possible by hiding the complexity of the backends and masking failures. MaxScale itself, however, is a single application running on a Linux box between the client application and the databases - so how do we make MaxScale itself highly available? This blog post shows how to quickly set up a Pacemaker/Corosync environment and configure MaxScale as a managed cluster resource.
By following the instructions detailed here - modifying configuration files and issuing system and software checks - anyone can create a complete setup with three CentOS 6.5 Linux servers and unicast heartbeat mode.
In a few steps MaxScale will be ready for basic HA operations, and one simple failure test (manually killing the running process) is shown as an example.
We make the following assumptions here:
The solution is a quick setup example that may not be suited for all production environments.
Pacemaker/Corosync and the crmsh command-line tools are assumed to be known at a basic level.
A Virtual IP is set up, providing access to the MaxScale process.
MaxScale is already configured and working with a MariaDB/MySQL replication setup or a MariaDB Galera Cluster.
The MaxScale process is started/stopped and monitored via the LSB-compatible /etc/init.d/maxscale script that is available in the RPM package from version 1.0. The script can also be found in the GitHub repository, for Ubuntu as well.
Step 1 - Clustering Software installation
On each cluster node do the following operations.
Let's start by enabling a new repo:
# vi /etc/yum.repos.d/ha-clustering.repo
and add the following lines to the file:
[haclustering]
name=HA Clustering
baseurl=http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/
enabled=1
gpgcheck=0
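Instead of editing the file interactively with vi, the same repo definition can be written in one shot with a heredoc. This is only a sketch: it defaults to a scratch path so it runs unprivileged, while on a real node REPO_FILE would be /etc/yum.repos.d/ha-clustering.repo.

```shell
# Write the haclustering repo file non-interactively.
# REPO_FILE is a stand-in path; use /etc/yum.repos.d/ha-clustering.repo on a real node.
REPO_FILE=${REPO_FILE:-/tmp/ha-clustering.repo}
cat > "$REPO_FILE" <<'EOF'
[haclustering]
name=HA Clustering
baseurl=http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/
enabled=1
gpgcheck=0
EOF
# Confirm the section header landed in the file
grep -c '^\[haclustering\]' "$REPO_FILE"   # prints 1
```

Running the same script on every node keeps the repo definition identical across the cluster.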
Now install the software:
# yum install pacemaker corosync crmsh
Please note the package versions used in this setup:
pacemaker-1.1.10-14.el6_5.3.x86_64
corosync-1.4.5-2.4.x86_64
crmsh-2.0+git46-1.1.x86_64
Step 2 - Configuring the system
Let's begin by assigning the hostname to each node.
The node names are: node1, node2, node3.
# hostname node1
...
# hostname nodeN
Then write entries in /etc/hosts:
On each node add the server names and current-node, which is an alias for the current server.
On node3:
# vi /etc/hosts
10.74.14.39     node1
10.228.103.72   node2
10.35.15.26     node3 current-node
On node1:
# vi /etc/hosts
10.74.14.39     node1 current-node
10.228.103.72   node2
10.35.15.26     node3
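The per-node editing above can also be scripted. The sketch below (writing to a scratch file rather than the real /etc/hosts, and using node1 as the local node) generates the three entries and tags only the local node with the current-node alias:

```shell
# Build a hosts fragment where only the local node carries the
# "current-node" alias. SELF names the node being configured;
# HOSTS is a scratch path standing in for /etc/hosts.
SELF=node1
HOSTS=/tmp/hosts.maxscale
: > "$HOSTS"
while read -r ip name; do
    alias=""
    [ "$name" = "$SELF" ] && alias=" current-node"
    printf '%s\t%s%s\n' "$ip" "$name" "$alias" >> "$HOSTS"
done <<'EOF'
10.74.14.39 node1
10.228.103.72 node2
10.35.15.26 node3
EOF
cat "$HOSTS"
```

Setting SELF from `hostname` on each node would produce the correct alias everywhere with one script.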
Prepare the authkey for optional cryptographic use.
On one of the nodes, say node2, run the corosync-keygen utility and follow the instructions:
[root@node2 ~]# corosync-keygen
Corosync Cluster Engine Authentication key generator.
       Gathering 1024 bits for key from /dev/random.
       Press keys on your keyboard to generate entropy.
After completion the key can be found in /etc/corosync/authkey.
Now let's create the corosync configuration file:
[root@node2 ~]# vi /etc/corosync/corosync.conf
Add the following content to the file:
# Please read the corosync.conf.5 manual page
compatibility: whitetank

totem {
        version: 2
        secauth: off
        interface {
                member {
                        memberaddr: node1
                }
                member {
                        memberaddr: node2
                }
                member {
                        memberaddr: node3
                }
                ringnumber: 0
                bindnetaddr: current-node
                mcastport: 5405
                ttl: 1
        }
        transport: udpu
}

logging {
        fileline: off
        to_logfile: yes
        to_syslog: yes
        logfile: /var/log/cluster/corosync.log
        debug: off
        timestamp: on
        logger_subsys {
                subsys: AMF
                debug: off
        }
}

# this will start the Pacemaker processes
service {
        ver: 0
        name: pacemaker
}
A few notes here:
Unicast UDP is used.
bindnetaddr for the Corosync process is "current-node", which resolves to the right value on each node thanks to the alias added in /etc/hosts above.
The Pacemaker processes are started by the Corosync daemon, so there is no need to launch them via /etc/init.d/pacemaker start.
We can now copy the configuration files and authkey to each of the other nodes:
[root@node2 ~]# scp /etc/corosync/*  root@node1:/etc/corosync/
...
[root@node2 ~]# scp /etc/corosync/*  root@nodeN:/etc/corosync/
Step 3 - Start the Cluster
The cluster can be started now, but let's do additional checks before proceeding. Corosync needs UDP port 5405 to be open, so we need to configure any firewall or iptables rules accordingly.
For a quick start, just disable iptables on each node:
[root@node2 ~]# service iptables stop
...
[root@nodeN ~]# service iptables stop
Let's start Corosync on each node:
[root@node2 ~]# /etc/init.d/corosync start
...
[root@nodeN ~]# /etc/init.d/corosync start
and check that the corosync daemon has successfully bound to port 5405:
[root@node2 ~]# netstat -na | grep 5405
udp        0      0 10.228.103.72:5405        0.0.0.0:*
Also check that the other nodes are reachable with the nc utility in UDP mode (-u):
[root@node2 ~]# echo "check ..." | nc -u node1 5405
[root@node2 ~]# echo "check ..." | nc -u node3 5405
[root@node1 ~]# echo "check ..." | nc -u node2 5405
[root@node1 ~]# echo "check ..." | nc -u node3 5405
If the following message is displayed:
nc: Write error: Connection refused
there is an issue with communication between the nodes; this is most likely a problem with the firewall configuration on your nodes. Please check and resolve any such issues before continuing.
( _# I" Z3 K% o! \4 c7 fWe can check the cluster status, from any node, with this command:4 f( `. ~( p: V
[root@node3 ~]# crm status; ~5 @2 h) f8 D3 \* g
After a while the output will look like:
: k+ x4 o) l3 ^/ q1 N[root@node3 ~]# crm status
; W; O- Y  B) {% N- {- OLast updated: Mon Jun 30 12:47:53 2014
1 P2 t2 F: s' K) T* nLast change: Mon Jun 30 12:47:39 2014 via crmd on node2
2 w% _7 P3 ~8 X8 P$ K7 f; J, K6 bStack: classic openais (with plugin), Q, d, @- ^, @/ ]2 x
Current DC: node2 - partition with quorum) T, J6 Z9 ^: z8 n  d" W5 w
Version: 1.1.10-14.el6_5.3-368c726& N, K8 X% n" j8 x" L1 @
3 Nodes configured, 3 expected votes6 [) f  @4 d5 W7 F% f6 T
0 Resources configured6 r6 D, x* j/ R

6 c; N+ z. h/ r$ w7 \0 Y$ B: [! U* K9 A' U. R
Online: [ node1 node2 node3 ]/ a/ a7 ~- G" b) L3 ^# e
The cluster has been started successfully - that's the first achievement so far!
Please note that in this basic setup we disable the following properties:
stonith
quorum policy
[root@node3 ~]# crm configure property 'stonith-enabled'='false'
[root@node3 ~]# crm configure property 'no-quorum-policy'='ignore'
After these commands the configuration is automatically updated on every node; let's verify it from another node, say node1:
[root@node1 ~]# crm configure show
node node1
node node2
node node3
property cib-bootstrap-options: \
        dc-version=1.1.10-14.el6_5.3-368c726 \
        cluster-infrastructure="classic openais (with plugin)" \
        expected-quorum-votes=3 \
        stonith-enabled=false \
        no-quorum-policy=ignore \
        placement-strategy=balanced \
        default-resource-stickiness=infinity
Well done - the Corosync/Pacemaker cluster is now ready to manage resources. In the next steps we'll add MaxScale.
Step 4 - Check MaxScale init script
The new MaxScale /etc/init.d/maxscale script allows you to start/stop/restart and monitor the MaxScale process running on the system.
The script found in the RPM package already works with the default path /usr/local/skysql/maxscale.
It might be necessary to modify some variables, such as MAXSCALE_HOME (to match the installation directory you chose when you installed MaxScale), MAXSCALE_PIDFILE, or LD_LIBRARY_PATH.
We assume here that MaxScale is configured with a MariaDB/MySQL replication setup or a MariaDB Galera Cluster; those servers might be located on the three Linux boxes we are using or anywhere else.
The following commands should be issued on each node, to make sure the application can be run and managed:
[root@node1 ~]# /etc/init.d/maxscale
Usage: /etc/init.d/maxscale {start|stop|status|restart|condrestart|reload}
Start
[root@node1 ~]# /etc/init.d/maxscale start
Starting MaxScale: maxscale (pid 25892) is running...      [  OK  ]
Start again
[root@node1 ~]# /etc/init.d/maxscale start
Starting MaxScale:  found maxscale (pid 25892) is running. [  OK  ]
Stop
[root@node1 ~]# /etc/init.d/maxscale stop
Stopping MaxScale:                                         [  OK  ]
Stop again
[root@node1 ~]# /etc/init.d/maxscale stop
Stopping MaxScale:                                         [FAILED]
Status (MaxScale not running)
[root@node1 ~]# /etc/init.d/maxscale status
MaxScale is stopped                                        [FAILED]
Status (MaxScale is running)
[root@node1 ~]# /etc/init.d/maxscale status
Checking MaxScale status: MaxScale (pid 25953) is running. [  OK  ]
As the MaxScale script is LSB compatible - it returns the proper exit code for each action - it is now possible to configure the application as a resource in Pacemaker; the next step shows how to do it.
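What "LSB compatible" means in practice is that the exit code of the status action tells Pacemaker whether the resource is healthy: 0 for running, 3 for stopped (other codes signal error states). A quick sketch using a stand-in script (the real check would target /etc/init.d/maxscale) illustrates the convention:

```shell
# A minimal stand-in for an LSB init script: "status" exits 0 when a
# pidfile exists (service running) and 3 when it does not (stopped).
FAKE=/tmp/fake-lsb
PIDFILE=/tmp/fake-lsb.pid
cat > "$FAKE" <<'EOF'
#!/bin/sh
case "$1" in
    status) [ -f /tmp/fake-lsb.pid ] && exit 0 || exit 3 ;;
    *)      exit 2 ;;
esac
EOF
chmod +x "$FAKE"

rm -f "$PIDFILE"
"$FAKE" status; echo "stopped rc=$?"   # prints: stopped rc=3
touch "$PIDFILE"
"$FAKE" status; echo "running rc=$?"   # prints: running rc=0
```

Pacemaker's monitor operation calls status periodically and treats any code other than 0 as a failure, which is exactly what triggers the restart shown in Step 6.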
Step 5 - Configure MaxScale as a cluster resource
We assume here that MaxScale can run on each node with the same configuration file.
[root@node2 ~]# crm configure primitive MaxScale lsb:maxscale \
op monitor interval="10s" timeout="15s" \
op start interval="0" timeout="15s" \
op stop interval="0" timeout="30s"
The command above configures MaxScale as an LSB resource - note "lsb:maxscale".
In Pacemaker there are two different ways of managing applications:
Resource Agents (VIP, MySQL, Filesystem, etc.)
LSB scripts, for custom applications and, in general, applications that don't require the complexity of a resource agent.
MaxScale itself manages the backend servers we configured in the /etc/MaxScale.cnf service sections, such as:
[RW Split Router]
type=service
router=readwritesplit
servers=server1,server2,server3,server4,server5,server6,server7
user=maxuser
passwd=maxpwd
So we only want Pacemaker to manage the MaxScale process, and the LSB approach is well suited here.
If everything is fine we should see the resource running:
[root@node2 ~]# crm status
Last updated: Mon Jun 30 13:15:34 2014
Last change: Mon Jun 30 13:15:28 2014 via cibadmin on node2
Stack: classic openais (with plugin)
Current DC: node2 - partition with quorum
Version: 1.1.10-14.el6_5.3-368c726
3 Nodes configured, 3 expected votes
1 Resources configured

Online: [ node1 node2 node3 ]

MaxScale        (lsb:maxscale):        Started node1
Well done, another achievement here!
We now have MaxScale running via Pacemaker, and we no longer need it started via /etc/init.d at boot time. Pacemaker will do all the work, but Pacemaker itself needs to be started at boot; with a CentOS 6.5 setup we need at least:
# chkconfig maxscale off
# chkconfig corosync on
Step 6 - Does the HA software work? Let's see a resource restarted after a failure
The MaxScale application is now managed by the HA clustering software, but what does that mean?
Will the application be restarted in case of any failure? It should be!
Let's now kill the MaxScale process and see what happens ...
As we know, the MaxScale PID can easily be found in $MAXSCALE_HOME/log/maxscale.pid.
In this example the PID is 26114, and we kill the process with brute force:
[root@node2 ~]# kill -9 26114
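The manual kill can be generalized by reading the PID from the pidfile. The sketch below uses a background sleep as a stand-in for the maxscale process (so it is safe to run anywhere); on a real node the pidfile would be $MAXSCALE_HOME/log/maxscale.pid.

```shell
# Kill whatever process the pidfile points at, the way we killed
# MaxScale above. A background "sleep" plays the role of maxscale.
PIDFILE=/tmp/maxscale.pid
sleep 60 &
echo $! > "$PIDFILE"

kill -9 "$(cat "$PIDFILE")"
wait "$(cat "$PIDFILE")"
echo "victim exit code: $?"   # 137 = 128 + SIGKILL(9)
```

From Pacemaker's point of view the cause of death is irrelevant: the next monitor call returns "not running" and the recovery kicks in.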
[root@node2 ~]# crm status
Last updated: Mon Jun 30 13:16:11 2014
Last change: Mon Jun 30 13:15:28 2014 via cibadmin on node2
Stack: classic openais (with plugin)
Current DC: node2 - partition with quorum
Version: 1.1.10-14.el6_5.3-368c726
3 Nodes configured, 3 expected votes
1 Resources configured

Online: [ node1 node2 node3 ]

Failed actions:
    MaxScale_monitor_15000 on node1 'not running' (7): call=19, status=complete, last-rc-change='Mon Jun 30 13:16:14 2014', queued=0ms, exec=0ms
Note the MaxScale_monitor failed action above; after a few seconds the resource will be started again:
[root@node2 ~]# crm status
Last updated: Mon Jun 30 13:16:22 2014
Last change: Mon Jun 30 13:15:28 2014 via cibadmin on node1
Stack: classic openais (with plugin)
Current DC: node2 - partition with quorum
Version: 1.1.10-14.el6_5.3-368c726
3 Nodes configured, 3 expected votes
1 Resources configured

Online: [ node1 node2 node3 ]

 MaxScale        (lsb:maxscale):        Started node1
The HA clustering software will keep MaxScale running on one of the three Linux boxes we have, but ... on which node? And how could we connect to MaxScale from our client application if we don't know where it runs?
# mysql -h $MAXSCALE_IP -P 4006 -utest -p test
What is $MAXSCALE_IP then? Let's follow the last step ...
Step 7 - Add a Virtual IP (VIP) to the cluster
The solution for $MAXSCALE_IP is that the MaxScale process should be contacted through one well-known IP that can move across the nodes along with MaxScale.
The setup is very easy: assuming an additional IP address is available and can be added to one of the nodes, this is the new configuration to add:
[root@node2 ~]# crm configure primitive maxscale_vip ocf:heartbeat:IPaddr2 params ip=192.168.122.125 op monitor interval=10s
There is of course another action to take: the MaxScale process and the VIP must run on the same node, so it's mandatory to add the group 'maxscale_service' to the configuration:
[root@node2 ~]# crm configure group maxscale_service maxscale_vip MaxScale
Here is the final configuration:
[root@node3 ~]# crm configure show
node node1
node node2
node node3
primitive MaxScale lsb:maxscale \
        op monitor interval=15s timeout=10s \
        op start interval=0 timeout=15s \
        op stop interval=0 timeout=30s
primitive maxscale_vip IPaddr2 \
        params ip=192.168.122.125 \
        op monitor interval=10s
group maxscale_service maxscale_vip MaxScale \
        meta target-role=Started
property cib-bootstrap-options: \
        dc-version=1.1.10-14.el6_5.3-368c726 \
        cluster-infrastructure="classic openais (with plugin)" \
        expected-quorum-votes=3 \
        stonith-enabled=false \
        no-quorum-policy=ignore \
        placement-strategy=balanced \
        last-lrm-refresh=1404125486
Check the resource status:
[root@node1 ~]# crm status
Last updated: Mon Jun 30 13:51:29 2014
Last change: Mon Jun 30 13:51:27 2014 via crmd on node1
Stack: classic openais (with plugin)
Current DC: node2 - partition with quorum
Version: 1.1.10-14.el6_5.3-368c726
3 Nodes configured, 3 expected votes
2 Resources configured

Online: [ node1 node2 node3 ]

 Resource Group: maxscale_service
     maxscale_vip        (ocf::heartbeat:IPaddr2):        Started node2
     MaxScale        (lsb:maxscale):        Started node2
With both resources on node2, the MaxScale service is now reachable via the configured VIP address 192.168.122.125:
# mysql -h 192.168.122.125 -P 4006 -utest -p test
Please note that our three-box setup now requires four IP addresses: one for each node, plus the floating IP assigned to MaxScale.
Summary
The goal of this post was to present a quick HA solution for a running MaxScale setup, using a widely adopted open-source clustering solution.
Even though the main content could be read as a basic Corosync/Pacemaker setup guide, I encourage you to explore other failure scenarios and the cluster administrative commands - such as moving resources and adding constraints - that can be found through the links below.
The reader might also find the LSB script tutorials interesting, as they show how to bring yet another application to the HA side.