How it works:
Step 1: The manager node monitors the state of every node in the master-slave cluster. When the master fails, the manager retrieves as much of the failed master's binary log as it can and saves it to its own working directory.
Step 2: The manager finds the slave whose relay log is the most up to date and ships that relay log to the other slaves, bringing all slaves to a consistent state.
Step 3: Once all slaves are consistent, the manager elects a new master according to its configuration and hands the binary log it salvaged from the failed master to the new master.
Step 4: All remaining slaves are repointed at the new master, and all information about the failed node is purged from the new master.
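The four steps above can be sketched as a small, self-contained simulation. The data structures and field names below are hypothetical illustrations; real MHA (written in Perl) compares actual relay-log file names and positions on each slave:

```python
# Hypothetical sketch of MHA's failover decision. Real MHA operates on
# real binlog/relay-log files; here slaves are plain dicts.

def pick_most_current_slave(slaves):
    """Step 2: find the slave whose relay log is closest to the master."""
    return max(slaves, key=lambda s: (s["relay_log_file"], s["relay_log_pos"]))

def failover(slaves, salvaged_binlog_events):
    # Step 2: the most current slave becomes the sync source for the rest.
    donor = pick_most_current_slave(slaves)
    for s in slaves:
        if s is not donor:
            s["relay_log_file"] = donor["relay_log_file"]
            s["relay_log_pos"] = donor["relay_log_pos"]
    # Step 3: prefer a node flagged candidate_master=1, otherwise the donor.
    candidates = [s for s in slaves if s.get("candidate_master")]
    new_master = candidates[0] if candidates else donor
    # Step 3: apply the binlog events salvaged from the dead master.
    new_master["applied_events"] = list(salvaged_binlog_events)
    # Step 4: repoint every other slave at the new master.
    for s in slaves:
        if s is not new_master:
            s["master_host"] = new_master["host"]
    return new_master

slaves = [
    {"host": "172.16.1.13", "relay_log_file": "relay-log-bin.000002",
     "relay_log_pos": 540, "candidate_master": True},
    {"host": "172.16.1.14", "relay_log_file": "relay-log-bin.000002",
     "relay_log_pos": 520},
]
new_master = failover(slaves, ["event-547", "event-548"])
print(new_master["host"])  # 172.16.1.13 — the candidate node is promoted
```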

Server information

IP            Role
172.16.1.12   MySQL master
172.16.1.13   MySQL standby 1
172.16.1.14   MySQL standby 2
172.16.1.15   Manager

MySQL master configuration

Edit the configuration file

vi /etc/my.cnf
# add under the [mysqld] section:
log-bin = mysql-bin    # enable binary logging
server-id = 1

Restart the service

service mysqld restart

Open the firewall

firewall-cmd --add-port=3306/tcp --permanent
firewall-cmd --add-service=mysql --permanent
firewall-cmd --reload
setenforce 0    # put SELinux into permissive mode (temporary, until reboot)

Set a login password for the root user

mysqladmin -u root password as1a2nf3

Log in to MySQL and grant privileges

Grant the slave user replication privileges

mysql -uroot -pas1a2nf3
mysql> grant replication slave on *.* to 'slave'@'172.16.1.%' identified by 'as1a2nf3';
mysql> grant all on *.* to 'mha'@'172.16.1.%' identified by 'as1a2nf3';
mysql> flush privileges;
mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000001 |      546 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)

MySQL standby 1 configuration

Edit the configuration file

vi /etc/my.cnf
# add under the [mysqld] section:
server_id = 2              # server identifier
log-bin = mysql-bin        # enable binary logging
relay-log = relay-log-bin  # enable the relay log
relay_log_purge=0          # do not purge relay logs automatically
read_only=1                # read-only

Restart the service

service mysqld restart

Open the firewall

firewall-cmd --add-port=3306/tcp --permanent
firewall-cmd --add-service=mysql --permanent
firewall-cmd --reload
setenforce 0

Set a login password for the root user

mysqladmin -u root password as1a2nf3

Log in to the database and grant privileges

mysql -uroot -pas1a2nf3
mysql> change master to master_host='172.16.1.12',master_user='slave',master_password='as1a2nf3',master_log_file='mysql-bin.000001',master_log_pos=546;
mysql> start slave;
mysql> show slave status\G    # replication is healthy if the output contains:
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
mysql> grant replication slave on *.* to 'slave'@'172.16.1.%' identified by 'as1a2nf3';
mysql> grant all on *.* to 'mha'@'172.16.1.%' identified by 'as1a2nf3';
mysql> flush privileges;

MySQL standby 2 configuration

Edit the configuration file

vi /etc/my.cnf
# add under the [mysqld] section:
server_id = 3
log-bin = mysql-bin
relay-log = relay-log-bin
relay_log_purge=0
read_only=1

Restart the service

service mysqld restart

Open the firewall

firewall-cmd --add-port=3306/tcp --permanent
firewall-cmd --add-service=mysql --permanent
firewall-cmd --reload
setenforce 0

Set a login password for the root user

mysqladmin -u root password as1a2nf3

Log in to the database and grant privileges

mysql -uroot -pas1a2nf3
mysql> change master to master_host='172.16.1.12',master_user='slave',master_password='as1a2nf3',master_log_file='mysql-bin.000001',master_log_pos=546;
mysql> start slave;
mysql> show slave status\G    # replication is healthy if the output contains:
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
mysql> grant replication slave on *.* to 'slave'@'172.16.1.%' identified by 'as1a2nf3';
mysql> grant all on *.* to 'mha'@'172.16.1.%' identified by 'as1a2nf3';
mysql> flush privileges;

Configure MHA

Passwordless SSH

Configure passwordless SSH logins among all four nodes

[root@localhost ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:uNeSnO2vW67XhgcBMf0xTaRwxP9uU+jk1qxjS7rRxtU root@localhost
The key's randomart image is:
+---[RSA 2048]----+
| oo.oo+o|
| ...o+..|
| . ..+ |
| . . . o|
| . S . .E|
| o = .oo.o|
| . * o oB=+.|
| . o oo=X.=|
| =*=*o=.|
+----[SHA256]-----+
# enter the root password of each host when prompted
[root@localhost ~]# ssh-copy-id root@172.16.1.12
[root@localhost ~]# ssh-copy-id root@172.16.1.13
[root@localhost ~]# ssh-copy-id root@172.16.1.14

Install dependencies

# run on every node
yum -y install perl-DBD-MySQL perl-DBI
rpm -ivh epel-release-latest-7.noarch.rpm
rpm -ivh mha4mysql-node-0.56-0.el6.noarch.rpm

Install dependencies on the manager node

yum -y install perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager
rpm -ivh mha4mysql-manager-0.56-0.el6.noarch.rpm

Write the global configuration file

vi /etc/masterha_default.cnf
[server default]
user=mha                                # management user granted to the manager in the databases
password=as1a2nf3                       # password of the management user
ssh_user=root                           # ssh user the manager uses to reach the nodes
repl_user=slave                         # replication user of the replication setup
repl_password=as1a2nf3                  # password of the replication user
master_binlog_dir=/usr/local/mysql/data # directory where the master stores its binary logs
remote_workdir=/data/master             # working directory on the remote master where binlogs are saved during a switchover
ping_interval=2                         # interval (seconds) between pings to the master; default is 3; the master is considered down after three failed attempts
shutdown_script=""                      # script to run when the master goes down; may be empty
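The `ping_interval` behaviour described above, ping the master every 2 seconds and declare it dead after three consecutive failures, can be sketched as follows. This is an illustration only; real MHA issues SQL-level pings over a live MySQL connection:

```python
# Sketch of the health check implied by ping_interval: consume a sequence
# of ping results (True = master answered) and report the tick at which
# the master would be declared dead, i.e. after `max_failures` consecutive
# failed pings. One tick stands in for one ping_interval.

def monitor_master(pings, max_failures=3):
    """Return the tick at which failover would be triggered, else None."""
    failures = 0
    for tick, ok in enumerate(pings):
        failures = 0 if ok else failures + 1
        if failures >= max_failures:
            return tick  # failover is triggered here
    return None

# A transient glitch (two failed pings) does not trigger failover...
print(monitor_master([True, False, False, True, True]))   # None
# ...but three consecutive failures do.
print(monitor_master([True, False, False, False]))        # 3
```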

Write the manager node configuration file

mkdir /etc/mha
vi /etc/mha/app1.cnf
[server default]
manager_workdir=/data/manager/
manager_log=/data/manager/manager.log

[server1]
hostname=172.16.1.12
port=3306
ssh_port=22

[server2]
hostname=172.16.1.13
port=3306
ssh_port=22
candidate_master=1    # when the master fails, this node (server2) takes over as the new master

[server3]
hostname=172.16.1.14
port=3306
ssh_port=22
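The app1.cnf above is plain INI, so its structure can be inspected with Python's standard `configparser`, for example to list which nodes are flagged as failover candidates. This is purely illustrative; MHA itself parses the file with Perl's Config::Tiny:

```python
# Read an app1.cnf-style config and pick out the candidate-master nodes.
import configparser

# Inlined copy of (part of) the app1.cnf shown above.
APP1 = """
[server default]
manager_workdir=/data/manager/
manager_log=/data/manager/manager.log

[server1]
hostname=172.16.1.12
port=3306

[server2]
hostname=172.16.1.13
port=3306
candidate_master=1
"""

cfg = configparser.ConfigParser()
cfg.read_string(APP1)

# Every section except "server default" describes one MySQL node.
servers = [s for s in cfg.sections() if s != "server default"]
candidates = [s for s in servers if cfg[s].get("candidate_master") == "1"]
print(servers)     # ['server1', 'server2']
print(candidates)  # ['server2']
```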

Check that SSH communication between the nodes works

masterha_check_ssh --global-conf=/etc/masterha_default.cnf --conf=/etc/mha/app1.cnf
# the check passes if the final output contains "successfully"

Verify that master-slave replication between the database nodes is healthy

masterha_check_repl --global-conf=/etc/masterha_default.cnf --conf=/etc/mha/app1.cnf
# the check passes if the final output reports OK

Start MHA

masterha_manager --global-conf=/etc/masterha_default.cnf --conf=/etc/mha/app1.cnf --ignore_last_failover
# runs in the foreground (blocking)

Verification

# shut down the master MySQL instance to verify that failover succeeds,
# then check the slave status on standby B
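To confirm the failover from a script rather than by eye, the key/value output of `show slave status\G` on the surviving slave can be parsed and checked for the promoted node's address. A minimal sketch; the sample text below is illustrative, not captured from a real run:

```python
# Parse `show slave status\G`-style output and confirm the surviving slave
# now replicates from the promoted node (172.16.1.13). Sample output is
# hypothetical.

SAMPLE = """\
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 172.16.1.13
                  Master_User: slave
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
"""

def parse_slave_status(text):
    """Turn 'Key: Value' lines into a dict, ignoring decorative rows."""
    status = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            status[key.strip()] = value.strip()
    return status

status = parse_slave_status(SAMPLE)
assert status["Master_Host"] == "172.16.1.13"   # repointed at the new master
assert status["Slave_IO_Running"] == "Yes"
assert status["Slave_SQL_Running"] == "Yes"
print("failover confirmed")
```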