MySQL - Building an MMM High-Availability Cluster (complete hands-on walkthrough)

This article introduces the MMM (Multi-Master Replication Manager) scheme and the principles of the MMM architecture, then walks through a complete build. Note that MMM is not suitable for services with strict data-consistency requirements. Let's work through it together.

First, let's look at the overall architecture and role assignment:

Role       Host      IP address         Usage        VIP
master1    master    192.168.142.135    write        192.168.142.188
master2    backup    192.168.142.132    write|read   192.168.142.188
slave1     slave     192.168.142.136    read         192.168.142.200, 192.168.142.210
slave2     slave     192.168.142.137    read         192.168.142.200, 192.168.142.210
monitor    monitor   192.168.142.143    monitor      no VIP

Specific configuration steps

I. Experimental environment configuration

Configure the Aliyun yum repository (skip if it is already configured)

master1 is used as the example here; in fact all five servers need the same setup

[root@master1 ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@master1 ~]# yum -y install epel-release
[root@master1 ~]# yum clean all && yum makecache

Install MySQL on all master and slave servers, then configure it

[root@master1 ~]# yum install mysql -y
[root@master1 ~]# vim /etc/my.cnf      #edit the MySQL configuration file
    ##the options below go under the [mysqld] section; the server id must differ on every host
    server-id = 1
    log-bin = zhu-bin
    log-slave-updates = true    ##write replicated updates into this server's own binlog
    sync_binlog = 1             ##flush the binary log to disk on every commit
    auto_increment_increment = 2    ###auto-increment step of 2
    auto_increment_offset = 1       ###master1 starts at 1 (master2 uses offset 2, see the sketch below)
[root@master1 ~]# systemctl restart mysqld
[root@master1 ~]# systemctl enable mysqld
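
On master2 the configuration is identical except for the server id and the auto-increment offset, which together guarantee that the two masters never generate colliding auto-increment keys. A minimal sketch of the two lines that differ (the exact server-id value is only an assumption; any value unique within the cluster works):

    server-id = 2                   ##different from master1
    auto_increment_offset = 2       ##master1 generates 1,3,5,... and master2 generates 2,4,6,...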

II. Configure master-master replication (the two master servers replicate from each other)

master1 is configured as a slave of master2, and master2 as a slave of master1

[root@master1 ~]# mysql -u root -p
grant replication slave on *.* to 'myslave'@'192.168.142.%' identified by 'asd123';
    #Create the replication account the other servers use to pull from this master (run the grant on both masters)
show master status;
+------------------+----------+--------------+--------------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB         |
+------------------+----------+--------------+--------------------------+
| mysql_bin.000001 |      245 |              | mysql,information_schema |
+------------------+----------+--------------+--------------------------+
1 row in set (0.00 sec)
    #Output on the master side; the values that matter are the File name and the Position number
change master to master_host='Primary server address',master_user='myslave',master_password='asd123',master_log_file='File value shown above',master_log_pos=Position value;
    ##Point each master at the other: run this on master1 with master2's values and on master2 with master1's values (a concrete example follows below)
start slave;
show slave status\G
    Slave_IO_Running: Yes
    Slave_SQL_Running: Yes
    #Both the IO thread and the SQL thread must show Yes before replication is considered up
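
For example, taking the SHOW MASTER STATUS output shown above, the statement run on master2 (pointing at master1, 192.168.142.135) would look like the sketch below; on master1 you run the same statement with master2's address and master2's own File/Position values:

change master to master_host='192.168.142.135',master_user='myslave',master_password='asd123',master_log_file='mysql_bin.000001',master_log_pos=245;
start slave;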

III. Configure master-slave replication (both slaves point to the same master)

Configure both slave servers to point to master1

[root@slave1 ~]# vim /etc/my.cnf
    server-id = 6    #must be unique across the cluster (slave2 uses a different value, see below)
    log-bin=mysql-bin
    log-slave-updates = true
    sync_binlog = 1
[root@slave1 ~]# systemctl restart mysqld
[root@slave1 ~]# netstat -atnp | grep 3306
tcp6       0      0 :::3306                 :::*                    LISTEN      38831/mysqld
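
On slave2 the only line that differs is the server id (the value below is an assumption; anything not used elsewhere in the cluster is fine):

    server-id = 7    #unique, different from every other server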

#Log in and point the slave at master1 (both slaves run exactly the same commands)
[root@slave1 ~]# mysql -uroot -p
change master to master_host='Primary server address',master_user='myslave',master_password='asd123',master_log_file='File value from SHOW MASTER STATUS on master1',master_log_pos=Position value;
    ##Both slave1 and slave2 take master1 as their master
start slave;
show slave status\G
    Slave_IO_Running: Yes
    Slave_SQL_Running: Yes
    #Both the IO thread and the SQL thread must show Yes
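
As an optional sanity check (not part of the original steps), create a throwaway database on master1 and confirm that it appears on master2 and on both slaves, then drop it again; the drop replicates as well:

create database mmm_test;            -- run on master1
show databases like 'mmm_test';      -- run on master2, slave1 and slave2
drop database mmm_test;              -- run on master1 to clean up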

IV. Install and configure MMM on all servers

Modify the MMM common configuration file

[root@master1 ~]# yum -y install mysql-mmm*

#Edit the MMM common configuration file
[root@master1 ~]# vim /etc/mysql-mmm/mmm_common.conf
cluster_interface       ens33
......
replication_user        myslave     #replication account created earlier on the masters
replication_password    asd123
agent_user              mmm_user    #account used by the MMM agents (granted later)
agent_password          123123

<host db1>
    ip      192.168.142.135        #IP address of master1
    mode    master
    peer    db2
</host>

<host db2>
    ip      192.168.142.132        #IP address of master2
    mode    master
    peer    db1
</host>

<host db3>
    ip      192.168.142.136        #IP address of slave1
    mode    slave
</host>

<host db4>
    ip      192.168.142.137        #IP address of slave2
    mode    slave
</host>

<role writer>
    hosts   db1, db2
    ips     192.168.142.188       ###VIP virtual IP address
    mode    exclusive
</role>

<role reader>
    hosts   db3, db4
    ips     192.168.142.200, 192.168.142.210      ###VIP virtual IP address
    mode    balanced
</role>

#Copy the common configuration file to all the other nodes with scp
[root@master1 ~]# scp /etc/mysql-mmm/mmm_common.conf root@192.168.142.132:/etc/mysql-mmm/
[root@master1 ~]# scp /etc/mysql-mmm/mmm_common.conf root@192.168.142.136:/etc/mysql-mmm/
[root@master1 ~]# scp /etc/mysql-mmm/mmm_common.conf root@192.168.142.137:/etc/mysql-mmm/
[root@master1 ~]# scp /etc/mysql-mmm/mmm_common.conf root@192.168.142.143:/etc/mysql-mmm/
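
Equivalently, the push can be written as a small loop (assuming root SSH access to the four remaining hosts):

[root@master1 ~]# for host in 192.168.142.132 192.168.142.136 192.168.142.137 192.168.142.143; do scp /etc/mysql-mmm/mmm_common.conf root@$host:/etc/mysql-mmm/; done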

Modify the MMM monitor configuration file (on the monitor host only)

[root@localhost ~]# vim /etc/mysql-mmm/mmm_mon.conf
<monitor>
    ......
    ping_ips            192.168.142.135,192.168.142.132,192.168.142.136,192.168.142.137     #IP addresses of all master and slave servers
    auto_set_online     5
    #seconds to wait before a recovered host is automatically set online
</monitor>

<host default>
    monitor_user        mmm_monitor     ##monitoring account (granted on the databases in the next step)
    monitor_password    123123
</host>

Grant privileges on every master and slave database (the MMM agent account and the monitoring account)

[root@master1 ~]# mysql -u root -p
grant super,replication client,process on *.* to 'mmm_user'@'192.168.142.%' identified by '123123';
grant replication client on *.* to 'mmm_monitor'@'192.168.142.%' identified by '123123';
flush privileges;
##Run these grants on every master and slave
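
As a quick optional check (again not in the original steps), verify from the monitor host that the monitoring account can actually log in to one of the database servers, for example:

[root@localhost ~]# mysql -u mmm_monitor -p123123 -h 192.168.142.135 -e 'select 1;'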

Modify the agent configuration file (on every master and slave)

The file is located at /etc/mysql-mmm/mmm_agent.conf

[root@master1 ~]# vim /etc/mysql-mmm/mmm_agent.conf
this db1        #set to the name this host has in mmm_common.conf (see the mapping below)
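
The `this` value must match the host name defined for the server in mmm_common.conf, so every node gets a different line (the stock file also starts with an `include mmm_common.conf` line, which is left unchanged):

    # master1:  this db1
    # master2:  this db2
    # slave1:   this db3
    # slave2:   this db4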

V. Start the services

Start the MMM agent service on all master and slave servers

[root@master1 ~]# systemctl start mysql-mmm-agent.service 
[root@master1 ~]# systemctl enable mysql-mmm-agent.service 

Start the MMM monitor service on the monitoring host

[root@localhost ~]# systemctl stop firewalld.service 
[root@localhost ~]# setenforce 0
[root@localhost ~]# systemctl start mysql-mmm-monitor.service
[root@localhost ~]# mmm_control show   ##View monitoring status
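
If a host shows up as AWAITING_RECOVERY rather than ONLINE right after start-up, it should come online by itself once the auto_set_online interval configured earlier has elapsed; it can also be brought online manually:

[root@localhost ~]# mmm_control set_online db1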

VI. Verify the results

[root@localhost mysql-mmm]# mmm_control show
  db1(192.168.142.135) master/ONLINE. Roles: writer(192.168.142.188)
  db2(192.168.142.132) master/ONLINE. Roles: 
  db3(192.168.142.136) slave/ONLINE. Roles: reader(192.168.142.200)
  db4(192.168.142.137) slave/ONLINE. Roles: reader(192.168.142.210)

//Use mmm_control to move the writer role (and its VIP) over to master2:
[root@localhost mysql-mmm]# mmm_control move_role writer db2
OK: Role 'writer' has been moved from 'db1' to 'db2'. Now you can wait some time and check new roles info!
[root@localhost mysql-mmm]# mmm_control show
  db1(192.168.142.135) master/ONLINE. Roles: 
  db2(192.168.142.132) master/ONLINE. Roles: writer(192.168.142.188)
  db3(192.168.142.136) slave/ONLINE. Roles: reader(192.168.142.200)
  db4(192.168.142.137) slave/ONLINE. Roles: reader(192.168.142.210)

//Check that all health checks pass:
[root@localhost mysql-mmm]# mmm_control checks all
db4  ping         [last change: 2019/11/25 15:25:54]  OK
db4  mysql        [last change: 2019/11/25 15:25:54]  OK
db4  rep_threads  [last change: 2019/11/25 15:25:54]  OK
db4  rep_backlog  [last change: 2019/11/25 15:25:54]  OK: Backlog is null
db2  ping         [last change: 2019/11/25 15:25:54]  OK
db2  mysql        [last change: 2019/11/25 15:25:54]  OK
db2  rep_threads  [last change: 2019/11/25 15:25:54]  OK
db2  rep_backlog  [last change: 2019/11/25 15:25:54]  OK: Backlog is null
db3  ping         [last change: 2019/11/25 15:25:54]  OK
db3  mysql        [last change: 2019/11/25 15:25:54]  OK
db3  rep_threads  [last change: 2019/11/25 15:25:54]  OK
db3  rep_backlog  [last change: 2019/11/25 15:25:54]  OK: Backlog is null
db1  ping         [last change: 2019/11/25 15:25:54]  OK
db1  mysql        [last change: 2019/11/25 15:25:54]  OK
db1  rep_threads  [last change: 2019/11/25 15:25:54]  OK
db1  rep_backlog  [last change: 2019/11/25 15:25:54]  OK: Backlog is null
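
As a further optional test (not part of the original steps), simulate a failure by stopping MySQL on the current writer, for example master2, and watch the monitor move the writer VIP to the other master:

[root@master2 ~]# systemctl stop mysqld
[root@localhost ~]# mmm_control show     ##db2 should go offline and the writer role should move back to db1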

That is the whole process of building the MMM cluster. Thank you for reading!
