Preface

The industry has some basic requirements for a highly available system. Put simply, they come down to the following: the architecture contains no single point of failure, and the availability of the service is guaranteed to the greatest extent possible. Availability is usually measured in "nines", that is, the percentage of total time during which the system is able to serve requests. For example, to reach 99.99% availability, the system may accumulate no more than roughly 52 minutes of downtime over a whole year.

System high-availability architecture

What kind of architecture do we actually need to build to achieve high availability? Simplified, it looks like this (see the architecture diagram in the original post): application traffic enters through a virtual IP managed by Keepalived, HAProxy load-balances that traffic across two Mycat nodes, the Mycat nodes route requests to the MySQL servers, and the Mycat configuration is coordinated through a Zookeeper cluster.

Server planning

My machine resources are limited, so I build the high-availability environment on a handful of servers; you can extend it to more servers by following exactly the same steps.

Hostname     IP address         Installed services
binghe151    192.168.175.151    Mycat, Zookeeper, MySQL, HAProxy, Keepalived, Xinetd
binghe152    192.168.175.152    Zookeeper, MySQL
binghe153    192.168.175.153    Zookeeper, MySQL
binghe154    192.168.175.154    Mycat, MySQL, HAProxy, Keepalived, Xinetd
binghe155    192.168.175.155    MySQL

Note: HAProxy and Keepalived are best deployed on the same servers as Mycat.

Install MySQL

Install MySQL on each server according to the plan above; the MySQL installation itself is not covered in this section.

Install the JDK

Mycat and Zookeeper both need a JDK to run, so a JDK has to be installed on every server. I will use binghe151 as the example; the installation is identical on the other servers. The steps are as follows.

(1) Download the JDK installation package. The version I downloaded is jdk-8u212-linux-x64.tar.gz; if the JDK has since been updated, download the corresponding version.

(2) Upload jdk-8u212-linux-x64.tar.gz to the /usr/local/src directory on binghe151.

(3) Extract the archive:

tar -zxvf jdk-8u212-linux-x64.tar.gz

(4) Move the extracted jdk1.8.0_212 directory to /usr/local on binghe151:

mv jdk1.8.0_212/ /usr/local/

(5) Configure the JDK system environment variables:

vim /etc/profile

JAVA_HOME=/usr/local/jdk1.8.0_212
CLASS_PATH=.:$JAVA_HOME/lib
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME CLASS_PATH PATH

Make the environment variables take effect:

source /etc/profile

(6) Check the JDK version:

[root@binghe151 ~]# java -version
java version "1.8.0_212"
Java(TM) SE Runtime Environment (build 1.8.0_212-b10)
Java HotSpot(TM) 64-Bit Server VM (build 25.212-b10, mixed mode)

The version information is printed correctly, so the JDK is installed successfully.

Install Mycat

Download the Mycat 1.6.7.4 Release package, extract it to /usr/local/mycat on the server, configure Mycat's system environment variable, and then edit Mycat's configuration files. The key points of my final configuration are the following.

schema.xml: four dataHosts named binghe151 to binghe154, one per MySQL server, each with balance="1", maxCon="1000", minCon="10", writeType="0", dbType="mysql", dbDriver="native", switchType="1", slaveThreshold="100", a heartbeat of select user(), and a single writeHost pointing at the local MySQL instance on port 3306 with the root account (the exact values can be seen later in the dataHost data stored in Zookeeper).

server.xml: uses the druidparser SQL parser, sets the Mycat service port to 3307 and the management port to 3308, binds 0.0.0.0, uses the utf8mb4 character set, enables io.mycat.server.interceptor.impl.StatisticsSqlInterceptor for UPDATE, DELETE and INSERT statements with the SQL log written to /tmp/sql.txt, and defines a mycat user (password stored encrypted) whose schema is shop.

rule.xml: a sharding rule that applies mod-long to the customer_id column with a mod count of 4.

sequence_db_conf.properties:

#sequence stored in datanode
GLOBAL=mycat
ORDER_MASTER=mycat
ORDER_DETAIL=mycat

This Mycat configuration is only for reference; you do not have to copy it and should configure Mycat according to your own business needs. The focus of this article is building the high-availability environment around Mycat.

In MySQL, create the account that Mycat uses to connect to MySQL:

CREATE USER 'mycat'@'192.168.175.%' IDENTIFIED BY 'mycat';
ALTER USER 'mycat'@'192.168.175.%' IDENTIFIED WITH mysql_native_password BY 'mycat';
GRANT SELECT, INSERT, UPDATE, DELETE, EXECUTE ON *.* TO 'mycat'@'192.168.175.%';
FLUSH PRIVILEGES;
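Before wiring this account into Mycat, it is worth confirming that it can actually reach every MySQL backend. The following loop is a quick check of my own (it is not part of the original steps); it assumes the mysql client is installed on binghe151 and that all five MySQL instances listen on port 3306, as planned above.

# Quick connectivity check for the mycat account against every MySQL backend.
# Assumption: the mysql client is available and every instance listens on 3306.
for host in 192.168.175.151 192.168.175.152 192.168.175.153 192.168.175.154 192.168.175.155; do
  if mysql -umycat -pmycat -h${host} -P3306 -e "SELECT 1" >/dev/null 2>&1; then
    echo "${host}: OK"
  else
    echo "${host}: FAILED"
  fi
done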
Install the Zookeeper cluster

With the JDK installed, the next step is the Zookeeper cluster. According to the server plan, Zookeeper runs on binghe151, binghe152 and binghe153.

1. Download Zookeeper

Download the Zookeeper package from an Apache mirror, for example https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/. You can also download zookeeper-3.5.5 directly on binghe151:

wget https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/zookeeper-3.5.5/apache-zookeeper-3.5.5-bin.tar.gz

This places the apache-zookeeper-3.5.5-bin.tar.gz package on binghe151.

2. Install and configure Zookeeper

Note: steps (1), (2) and (3) are all performed on binghe151.

(1) Extract the Zookeeper package

On binghe151, extract Zookeeper into /usr/local/ and rename the directory to zookeeper-3.5.5:

tar -zxvf apache-zookeeper-3.5.5-bin.tar.gz
mv apache-zookeeper-3.5.5-bin zookeeper-3.5.5

(2) Configure the Zookeeper system environment variables

Likewise add the Zookeeper variables to /etc/profile:

ZOOKEEPER_HOME=/usr/local/zookeeper-3.5.5
PATH=$ZOOKEEPER_HOME/bin:$PATH
export ZOOKEEPER_HOME PATH

Combined with the JDK variables configured earlier, my overall /etc/profile additions look like this:

MYSQL_HOME=/usr/local/mysql
JAVA_HOME=/usr/local/jdk1.8.0_212
MYCAT_HOME=/usr/local/mycat
ZOOKEEPER_HOME=/usr/local/zookeeper-3.5.5
MPC_HOME=/usr/local/mpc-1.1.0
GMP_HOME=/usr/local/gmp-6.1.2
MPFR_HOME=/usr/local/mpfr-4.0.2
CLASS_PATH=.:$JAVA_HOME/lib
LD_LIBRARY_PATH=$MPC_HOME/lib:$GMP_HOME/lib:$MPFR_HOME/lib:$LD_LIBRARY_PATH
PATH=$MYSQL_HOME/bin:$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:$MYCAT_HOME/bin:$PATH
export JAVA_HOME ZOOKEEPER_HOME MYCAT_HOME CLASS_PATH MYSQL_HOME MPC_HOME GMP_HOME MPFR_HOME LD_LIBRARY_PATH PATH

(3) Configure Zookeeper

First rename the zoo_sample.cfg file under $ZOOKEEPER_HOME/conf ($ZOOKEEPER_HOME is the Zookeeper installation directory) to zoo.cfg:

cd /usr/local/zookeeper-3.5.5/conf/
mv zoo_sample.cfg zoo.cfg

Then edit zoo.cfg so that it contains the following:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper-3.5.5/data
dataLogDir=/usr/local/zookeeper-3.5.5/dataLog
clientPort=2181
server.1=binghe151:2888:3888
server.2=binghe152:2888:3888
server.3=binghe153:2888:3888

Create the data and dataLog directories under the Zookeeper installation directory:

mkdir -p /usr/local/zookeeper-3.5.5/data
mkdir -p /usr/local/zookeeper-3.5.5/dataLog

Switch to the new data directory and create a myid file whose content is the digit 1:

cd /usr/local/zookeeper-3.5.5/data
vim myid

Write 1 into the myid file.

3. Copy Zookeeper and the environment file to the other servers

Note: steps (1) and (2) are performed on binghe151.

(1) Copy Zookeeper to the other servers

According to the plan, copy Zookeeper to binghe152 and binghe153:

scp -r /usr/local/zookeeper-3.5.5/ binghe152:/usr/local/
scp -r /usr/local/zookeeper-3.5.5/ binghe153:/usr/local/

(2) Copy the environment file to the other servers

Copy /etc/profile to binghe152 and binghe153 as well:

scp /etc/profile binghe152:/etc/
scp /etc/profile binghe153:/etc/

These commands may prompt for passwords; just type them when asked.

4. Modify the myid files on the other servers

Change the content of Zookeeper's myid file to 2 on binghe152 and to 3 on binghe153.

On binghe152:

echo 2 > /usr/local/zookeeper-3.5.5/data/myid
cat /usr/local/zookeeper-3.5.5/data/myid
2

On binghe153:

echo 3 > /usr/local/zookeeper-3.5.5/data/myid
cat /usr/local/zookeeper-3.5.5/data/myid
3

5. Make the environment variables take effect

Run the following on binghe151, binghe152 and binghe153:

source /etc/profile

6. Start the Zookeeper cluster

Run the following on binghe151, binghe152 and binghe153:

zkServer.sh start

7. Check the status of the Zookeeper cluster

binghe151:

[root@binghe151 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.5.5/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower

binghe152:

[root@binghe152 local]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.5.5/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: leader

binghe153:

[root@binghe153 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.5.5/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower

So the Zookeeper instances on binghe151 and binghe153 are followers, and the one on binghe152 is the leader.
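If you do not want to log in to each server just to read its role, the status commands can be run from a single host. This is a convenience sketch of my own, not part of the original procedure; it assumes password-less SSH from binghe151 to the other two nodes.

# Query the role of every node in the ensemble from one host.
for host in binghe151 binghe152 binghe153; do
  echo "=== ${host} ==="
  ssh ${host} "source /etc/profile && zkServer.sh status" 2>/dev/null | grep Mode
done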
Initialize the Mycat configuration into the Zookeeper cluster

Note: the data in Zookeeper is initialized from binghe151, because Mycat is already installed on binghe151.

1. Look at the initialization script

The bin directory of the Mycat installation ships with an init_zk_data.sh script:

[root@binghe151 ~]# ll /usr/local/mycat/bin/
total 384
-rwxr-xr-x 1 root root   3658 Feb 26 17:10 dataMigrate.sh
-rwxr-xr-x 1 root root   1272 Feb 26 17:10 init_zk_data.sh
-rwxr-xr-x 1 root root  15701 Feb 28 20:51 mycat
-rwxr-xr-x 1 root root   2986 Feb 26 17:10 rehash.sh
-rwxr-xr-x 1 root root   2526 Feb 26 17:10 startup_nowrap.sh
-rwxr-xr-x 1 root root 140198 Feb 28 20:51 wrapper-linux-ppc-64
-rwxr-xr-x 1 root root  99401 Feb 28 20:51 wrapper-linux-x86-32
-rwxr-xr-x 1 root root 111027 Feb 28 20:51 wrapper-linux-x86-64

init_zk_data.sh is the script that initializes the Mycat configuration into Zookeeper: it reads the configuration files under the conf directory of the Mycat installation (the ones placed in the zkconf subdirectory in the next step) and writes them into the Zookeeper cluster.

2. Copy the Mycat configuration files

First look at the files under the conf directory of the Mycat installation:

[root@binghe151 ~]# cd /usr/local/mycat/conf/
[root@binghe151 conf]# ll
total 108
-rwxrwxrwx 1 root root   92 Feb 26 17:10 autopartition-long.txt
-rwxrwxrwx 1 root root   51 Feb 26 17:10 auto-sharding-long.txt
-rwxrwxrwx 1 root root   67 Feb 26 17:10 auto-sharding-rang-mod.txt
-rwxrwxrwx 1 root root  340 Feb 26 17:10 cacheservice.properties
-rwxrwxrwx 1 root root 3338 Feb 26 17:10 dbseq.sql
-rwxrwxrwx 1 root root 3532 Feb 26 17:10 dbseq - utf8mb4.sql
-rw-r--r-- 1 root root   86 Mar  1 22:37 dnindex.properties
-rwxrwxrwx 1 root root  446 Feb 26 17:10 ehcache.xml
-rwxrwxrwx 1 root root 2454 Feb 26 17:10 index_to_charset.properties
-rwxrwxrwx 1 root root 1285 Feb 26 17:10 log4j2.xml
-rwxrwxrwx 1 root root  183 Feb 26 17:10 migrateTables.properties
-rwxrwxrwx 1 root root  271 Feb 26 17:10 myid.properties
-rwxrwxrwx 1 root root   16 Feb 26 17:10 partition-hash-int.txt
-rwxrwxrwx 1 root root  108 Feb 26 17:10 partition-range-mod.txt
-rwxrwxrwx 1 root root  988 Mar  1 16:59 rule.xml
-rwxrwxrwx 1 root root 3883 Mar  3 23:59 schema.xml
-rwxrwxrwx 1 root root  440 Feb 26 17:10 sequence_conf.properties
-rwxrwxrwx 1 root root   84 Mar  3 23:52 sequence_db_conf.properties
-rwxrwxrwx 1 root root   29 Feb 26 17:10 sequence_distributed_conf.properties
-rwxrwxrwx 1 root root   28 Feb 26 17:10 sequence_http_conf.properties
-rwxrwxrwx 1 root root   53 Feb 26 17:10 sequence_time_conf.properties
-rwxrwxrwx 1 root root 2420 Mar  4 15:14 server.xml
-rwxrwxrwx 1 root root   18 Feb 26 17:10 sharding-by-enum.txt
-rwxrwxrwx 1 root root 4251 Feb 28 20:51 wrapper.conf
drwxrwxrwx 2 root root 4096 Feb 28 21:17 zkconf
drwxrwxrwx 2 root root 4096 Feb 28 21:17 zkdownload

Now copy schema.xml, server.xml, rule.xml and sequence_db_conf.properties from the conf directory into the zkconf subdirectory:

cp schema.xml server.xml rule.xml sequence_db_conf.properties zkconf/

3. Write the Mycat configuration into the Zookeeper cluster

Run the init_zk_data.sh script to initialize the configuration into the Zookeeper cluster:

[root@binghe151 bin]# /usr/local/mycat/bin/init_zk_data.sh
2020-03-08 20:03:13 INFO JAVA_CMD=/usr/local/jdk1.8.0_212/bin/java
2020-03-08 20:03:13 INFO Start to initialize /mycat of ZooKeeper
2020-03-08 20:03:14 INFO Done

According to this output, Mycat wrote its initial configuration into Zookeeper successfully.
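Since the copy-then-initialize sequence has to be repeated whenever the configuration changes (we will do it again when configuring read/write splitting), a small wrapper makes it a one-step operation. This is just a convenience sketch around the two commands above, not something the original article provides.

# Copy the four config files into zkconf/ and push them to the Zookeeper cluster in one go.
sync_mycat_conf_to_zk() {
  cd /usr/local/mycat/conf || return 1
  cp schema.xml server.xml rule.xml sequence_db_conf.properties zkconf/
  /usr/local/mycat/bin/init_zk_data.sh
}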
4. Verify that the Mycat configuration was written into Zookeeper

We can use Zookeeper's client command zkCli.sh to log in to Zookeeper and verify that the Mycat configuration was written successfully. First log in to Zookeeper:

[root@binghe151 ~]# zkCli.sh
Connecting to localhost:2181
###################  output omitted  ######################
Welcome to ZooKeeper!
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0]

Then inspect the mycat nodes on the Zookeeper command line:

[zk: localhost:2181(CONNECTED) 0] ls /
[mycat, zookeeper]
[zk: localhost:2181(CONNECTED) 1] ls /mycat
[mycat-cluster-1]
[zk: localhost:2181(CONNECTED) 2] ls /mycat/mycat-cluster-1
[cache, line, rules, schema, sequences, server]
[zk: localhost:2181(CONNECTED) 3]

There are six child nodes under /mycat/mycat-cluster-1. Next look inside the schema node:

[zk: localhost:2181(CONNECTED) 3] ls /mycat/mycat-cluster-1/schema
[dataHost, dataNode, schema]

And then look at the dataHost configuration:

[zk: localhost:2181(CONNECTED) 4] get /mycat/mycat-cluster-1/schema/dataHost
[{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe151","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe51","url":"192.168.175.151:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe152","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe52","url":"192.168.175.152:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe153","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe53","url":"192.168.175.153:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe154","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe54","url":"192.168.175.154:3306","password":"root","user":"root"}]}]

The output is JSON printed on a single line. Formatted, the first entry looks like this (the entries for binghe152, binghe153 and binghe154 follow the same structure, pointing at 192.168.175.152:3306, 192.168.175.153:3306 and 192.168.175.154:3306 respectively):

[
    {
        "balance": 1,
        "maxCon": 1000,
        "minCon": 10,
        "name": "binghe151",
        "writeType": 0,
        "switchType": 1,
        "slaveThreshold": 100,
        "dbType": "mysql",
        "dbDriver": "native",
        "heartbeat": "select user()",
        "writeHost": [
            {
                "host": "binghe51",
                "url": "192.168.175.151:3306",
                "password": "root",
                "user": "root"
            }
        ]
    },
    ...
]

This shows that the dataHost configuration from Mycat's schema.xml was successfully written into Zookeeper.

To confirm that the Mycat configuration has also been synchronized to the other Zookeeper nodes, log in to Zookeeper on binghe152 and binghe153 as well.

binghe152:

[root@binghe152 ~]# zkCli.sh
Connecting to localhost:2181
#################  output omitted  ################
[zk: localhost:2181(CONNECTED) 0] get /mycat/mycat-cluster-1/schema/dataHost
(the same dataHost JSON as on binghe151 is returned)

So the Mycat configuration has been synchronized to Zookeeper on binghe152.

binghe153:

[root@binghe153 ~]# zkCli.sh
Connecting to localhost:2181
#####################  output omitted  #####################
[zk: localhost:2181(CONNECTED) 0] get /mycat/mycat-cluster-1/schema/dataHost
(the same dataHost JSON as on binghe151 is returned)

So the Mycat configuration has been synchronized to Zookeeper on binghe153 as well.
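Instead of logging in to each Zookeeper node interactively, the same check can be scripted. The spot check below is my own addition, not part of the original article; it assumes zkCli.sh accepts a command directly on its command line, as the 3.5.x client does.

# Confirm the dataHost configuration is visible from every Zookeeper node.
for host in 192.168.175.151 192.168.175.152 192.168.175.153; do
  echo -n "${host}: "
  zkCli.sh -server ${host}:2181 get /mycat/mycat-cluster-1/schema/dataHost 2>/dev/null \
    | grep -q binghe154 && echo "dataHost present" || echo "dataHost missing"
done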
Configure Mycat to start with Zookeeper support

1. Configure Mycat on binghe151

On binghe151, go to the conf directory of the Mycat installation; as the directory listing shown earlier indicates, it contains a myid.properties file. Edit it with vim:

vim myid.properties

After editing, myid.properties contains the following:

loadZk=true
zkURL=192.168.175.151:2181,192.168.175.152:2181,192.168.175.153:2181
clusterId=mycat-cluster-1
myid=mycat_151
clusterSize=2
clusterNodes=mycat_151,mycat_154
#server  booster  ;  booster install on db same server,will reset all minCon to 2
type=server
boosterDataHosts=dataHost1

The important parameters are:

loadZk: whether to load the configuration from Zookeeper (true = yes, false = no).

zkURL: the Zookeeper connection addresses; multiple addresses are separated by commas.

clusterId: the identifier of the current Mycat cluster. It has to match the node name under the /mycat path in Zookeeper, as shown here:

[zk: localhost:2181(CONNECTED) 1] ls /mycat
[mycat-cluster-1]

myid: the id of the current Mycat node. My naming convention is the prefix mycat_ followed by the last octet of the server's IP address.

clusterSize: the number of Mycat nodes in the cluster. Mycat is deployed on binghe151 and binghe154 here, so the value is 2.

clusterNodes: all Mycat nodes in the cluster, listed by the node ids configured in myid and separated by commas. Here that is mycat_151,mycat_154.
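To make the meaning of these parameters concrete, here is what the file would look like for a hypothetical third Mycat node, say on a host binghe156. That node does not exist in this setup; the snippet is purely illustrative.

# Illustrative only: myid.properties for a hypothetical third Mycat node (binghe156).
loadZk=true
zkURL=192.168.175.151:2181,192.168.175.152:2181,192.168.175.153:2181
clusterId=mycat-cluster-1
myid=mycat_156
clusterSize=3
clusterNodes=mycat_151,mycat_154,mycat_156
type=server
boosterDataHosts=dataHost1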
2. Install a fresh Mycat on binghe154

Download and install the same Mycat version as on binghe151 and extract it to /usr/local/mycat on binghe154. Alternatively, simply copy the Mycat installation directory from binghe151 to binghe154:

[root@binghe151 ~]# scp -r /usr/local/mycat binghe154:/usr/local

Note: do not forget to configure Mycat's system environment variables on binghe154 as well.

3. Modify the Mycat configuration on binghe154

On binghe154, edit the myid.properties file under the conf directory of the Mycat installation:

vim /usr/local/mycat/conf/myid.properties

After editing, it contains the following:

loadZk=true
zkURL=192.168.175.151:2181,192.168.175.152:2181,192.168.175.153:2181
clusterId=mycat-cluster-1
myid=mycat_154
clusterSize=2
clusterNodes=mycat_151,mycat_154
#server  booster  ;  booster install on db same server,will reset all minCon to 2
type=server
boosterDataHosts=dataHost1

4. Restart Mycat

Restart Mycat on binghe151 and binghe154. Note: restart the one on binghe151 first.

binghe151:

[root@binghe151 ~]# mycat restart
Stopping Mycat-server...
Stopped Mycat-server.
Starting Mycat-server...

binghe154:

[root@binghe154 ~]# mycat restart
Stopping Mycat-server...
Stopped Mycat-server.
Starting Mycat-server...

Check the Mycat startup log on binghe151 and binghe154:

STATUS | wrapper  | 2020/03/08 21:08:15 | --> Wrapper Started as Daemon
STATUS | wrapper  | 2020/03/08 21:08:15 | Launching a JVM...
INFO   | jvm 1    | 2020/03/08 21:08:16 | Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
INFO   | jvm 1    | 2020/03/08 21:08:16 |   Copyright 1999-2006 Tanuki Software, Inc.  All Rights Reserved.
INFO   | jvm 1    | 2020/03/08 21:08:16 |
INFO   | jvm 1    | 2020/03/08 21:08:28 | MyCAT Server startup successfully. see logs in logs/mycat.log

The log shows that Mycat restarted successfully.

Because we restarted the Mycat on binghe151 first and the one on binghe154 afterwards, you will find that the schema.xml, server.xml, rule.xml and sequence_db_conf.properties files under the conf directory on binghe154 are now identical to the configuration files on binghe151: the Mycat on binghe154 read its configuration from Zookeeper. From now on we only need to modify the Mycat configuration stored in Zookeeper, and it will automatically be synchronized to every Mycat node, which keeps the configuration of all Mycat nodes consistent.

Configure the virtual IP

Configure the virtual IP on both binghe151 and binghe154:

ifconfig eth0:1 192.168.175.110 broadcast 192.168.175.255 netmask 255.255.255.0 up
route add -host 192.168.175.110 dev eth0:1

The result, taking binghe151 as the example:

[root@binghe151 ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:10:A1:45
          inet addr:192.168.175.151  Bcast:192.168.175.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe10:a145/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:116766 errors:0 dropped:0 overruns:0 frame:0
          TX packets:85230 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:25559422 (24.3 MiB)  TX bytes:55997016 (53.4 MiB)

eth0:1    Link encap:Ethernet  HWaddr 00:0C:29:10:A1:45
          inet addr:192.168.175.110  Bcast:192.168.175.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:51102 errors:0 dropped:0 overruns:0 frame:0
          TX packets:51102 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:2934009 (2.7 MiB)  TX bytes:2934009 (2.7 MiB)

Note: a VIP added on the command line disappears after the server reboots, so it is best to put the commands that create the VIP into a script, for example /usr/local/script/vip.sh:

mkdir /usr/local/script
vim /usr/local/script/vip.sh

The file contains:

ifconfig eth0:1 192.168.175.110 broadcast 192.168.175.255 netmask 255.255.255.0 up
route add -host 192.168.175.110 dev eth0:1

Then register /usr/local/script/vip.sh as a boot-time task:

echo /usr/local/script/vip.sh >> /etc/rc.d/rc.local
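Because the script runs at every boot, I would make it defensive so that running it twice does not stack duplicate addresses and routes. The variant below is my own assumption, not the article's script; it keeps exactly the same commands but only applies them when the VIP is missing.

#!/bin/bash
# Add the VIP only if it is not already configured on eth0.
if ! ip addr show dev eth0 | grep -q "192.168.175.110"; then
    ifconfig eth0:1 192.168.175.110 broadcast 192.168.175.255 netmask 255.255.255.0 up
    route add -host 192.168.175.110 dev eth0:1
fi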
Configure IP forwarding

On binghe151 and binghe154, enable IP forwarding in the kernel. Edit /etc/sysctl.conf:

vim /etc/sysctl.conf

Find the line:

net.ipv4.ip_forward = 0

and change it to:

net.ipv4.ip_forward = 1

Save, quit vim, and apply the setting:

sysctl -p

Install and configure the xinetd service

On the servers that run HAProxy, that is binghe151 and binghe154, we need the xinetd service in order to expose port 48700 for the Mycat status check.

(1) Install xinetd:

yum install xinetd -y

(2) Edit /etc/xinetd.conf:

vim /etc/xinetd.conf

Check whether the file contains the following line, and add it if it does not:

includedir /etc/xinetd.d

(3) Create the /etc/xinetd.d directory:

mkdir /etc/xinetd.d

Note: if the directory already exists, mkdir reports the following error, which can safely be ignored:

mkdir: cannot create directory '/etc/xinetd.d': File exists

(4) Under /etc/xinetd.d, create the configuration file for the Mycat status-check service, named mycat_status:

touch /etc/xinetd.d/mycat_status

(5) Edit mycat_status:

vim /etc/xinetd.d/mycat_status

After editing, it contains:

service mycat_status
{
    flags           = REUSE
    socket_type     = stream
    port            = 48700
    wait            = no
    user            = root
    server          = /usr/local/bin/mycat_check.sh
    log_on_failure  += USERID
    disable         = no
}

The relevant xinetd parameters are:

socket_type: how packets are handled; stream means TCP.
port: the port that xinetd listens on for this service.
wait: no means requests are not serialized, i.e. the service runs multi-threaded.
user: the user that runs the service.
server: the script to launch for each request.
log_on_failure: what to log when a request fails.
disable: must be set to no for the service to be enabled.

(6) Under /usr/local/bin, create the mycat_check.sh service script:

touch /usr/local/bin/mycat_check.sh

(7) Edit /usr/local/bin/mycat_check.sh:

vim /usr/local/bin/mycat_check.sh

After editing, the file contains:

#!/bin/bash
mycat=`/usr/local/mycat/bin/mycat status | grep 'not running' | wc -l`
if [ "$mycat" = "0" ]; then
    /bin/echo -e "HTTP/1.1 200 OK\r\n"
else
    /bin/echo -e "HTTP/1.1 503 Service Unavailable\r\n"
    /usr/local/mycat/bin/mycat start
fi

Make the script executable:

chmod a+x /usr/local/bin/mycat_check.sh

(8) Edit /etc/services:

vim /etc/services

Append the following line at the end of the file; the port number must match the one configured in /etc/xinetd.d/mycat_status:

mycat_status    48700/tcp    # mycat_status

(9) Restart xinetd:

service xinetd restart

(10) Check whether the mycat_status service is listening.

binghe151:

[root@binghe151 ~]# netstat -antup|grep 48700
tcp    0    0 :::48700    :::*    LISTEN    2776/xinetd

binghe154:

[root@binghe154 ~]# netstat -antup|grep 48700
tcp    0    0 :::48700    :::*    LISTEN    6654/xinetd

The mycat_status service started successfully on both servers. With that, xinetd is installed and configured, which means the Mycat status-check service is in place.

Install and configure HAProxy

Install HAProxy directly on binghe151 and binghe154:

yum install haproxy -y

After installation, HAProxy is configured through the /etc/haproxy directory:

[root@binghe151 ~]# ll /etc/haproxy/
total 4
-rw-r--r-- 1 root root 3142 Oct 21  2016 haproxy.cfg

There is a haproxy.cfg file under /etc/haproxy/. Edit it so that it contains the following:

global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option                  http-server-close
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

listen admin_status
    bind 0.0.0.0:48800
    stats uri /admin-status
    stats auth admin:admin

listen allmycat_service
    bind 0.0.0.0:3366
    mode tcp
    option tcplog
    option httpchk OPTIONS * HTTP/1.1\r\nHost:\ www
    balance roundrobin
    server mycat_151 192.168.175.151:3307 check port 48700 inter 5s rise 2 fall 3
    server mycat_154 192.168.175.154:3307 check port 48700 inter 5s rise 2 fall 3

listen allmycat_admin
    bind 0.0.0.0:3377
    mode tcp
    option tcplog
    option httpchk OPTIONS * HTTP/1.1\r\nHost:\ www
    balance roundrobin
    server mycat_151 192.168.175.151:3308 check port 48700 inter 5s rise 2 fall 3
    server mycat_154 192.168.175.154:3308 check port 48700 inter 5s rise 2 fall 3

Then start HAProxy on binghe151 and binghe154:

haproxy -f /etc/haproxy/haproxy.cfg
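Once HAProxy is up, the admin_status listener configured above gives an easy way to confirm that both Mycat backends pass the port-48700 health check. This is an optional check of my own, not part of the original steps; the admin:admin credentials and the /admin-status URI come straight from the haproxy.cfg above.

# Fetch HAProxy's statistics in CSV form; the status column shows UP/DOWN per backend.
curl -s -u admin:admin "http://192.168.175.151:48800/admin-status;csv"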
Next, use the mysql command to connect to Mycat through the virtual IP and port that HAProxy listens on:

[root@binghe151 ~]# mysql -umycat -pmycat -h192.168.175.110 -P3366 --default-auth=mysql_native_password
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.6.29-mycat-1.6.7.4-release-20200228205020 MyCat Server (OpenCloudDB)

Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>

The connection to Mycat succeeds.

Install Keepalived

1. Install and configure Keepalived

Install Keepalived directly on binghe151 and binghe154:

yum install keepalived -y

The installation creates a keepalived directory under /etc. Next, configure /etc/keepalived/keepalived.conf on both servers:

vim /etc/keepalived/keepalived.conf

Configuration on binghe151:

! Configuration File for keepalived

vrrp_script chk_http_port {
    script "/etc/keepalived/check_haproxy.sh"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_http_port
    }
    virtual_ipaddress {
        192.168.175.110 dev eth0 scope global
    }
}

Configuration on binghe154:

! Configuration File for keepalived

vrrp_script chk_http_port {
    script "/etc/keepalived/check_haproxy.sh"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 120
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_http_port
    }
    virtual_ipaddress {
        192.168.175.110 dev eth0 scope global
    }
}

2. Write the HAProxy check script

Next, on both binghe151 and binghe154, create the check_haproxy.sh script under /etc/keepalived with the following content:

#!/bin/bash
STARTHAPROXY="/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg"
STOPKEEPALIVED="/etc/init.d/keepalived stop"
#STOPKEEPALIVED="/usr/bin/systemctl stop keepalived"
LOGFILE="/var/log/keepalived-haproxy-state.log"
echo "[check_haproxy status]" >> $LOGFILE
A=`ps -C haproxy --no-header | wc -l`
echo "[check_haproxy status]" >> $LOGFILE
date >> $LOGFILE
if [ $A -eq 0 ];then
    echo $STARTHAPROXY >> $LOGFILE
    $STARTHAPROXY >> $LOGFILE 2>&1
    sleep 5
fi
if [ `ps -C haproxy --no-header | wc -l` -eq 0 ];then
    exit 0
else
    exit 1
fi

Make the script executable:

chmod a+x /etc/keepalived/check_haproxy.sh

3. Start Keepalived

With the configuration in place, start Keepalived on binghe151 and binghe154:

/etc/init.d/keepalived start

Check whether Keepalived started successfully.

binghe151:

[root@binghe151 ~]# ps -ef | grep keepalived
root       1221      1  0 20:06 ?        00:00:00 keepalived -D
root       1222   1221  0 20:06 ?        00:00:00 keepalived -D
root       1223   1221  0 20:06 ?        00:00:02 keepalived -D
root      93290   3787  0 21:42 pts/0    00:00:00 grep keepalived

binghe154:

[root@binghe154 ~]# ps -ef | grep keepalived
root       1224      1  0 20:06 ?        00:00:00 keepalived -D
root       1225   1224  0 20:06 ?        00:00:00 keepalived -D
root       1226   1224  0 20:06 ?        00:00:02 keepalived -D
root      94636   3798  0 21:43 pts/0    00:00:00 grep keepalived

The Keepalived service is running on both servers.
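Before relying on Keepalived to run it, the health-check script can also be exercised once by hand on each node. This is an optional manual test, not in the original article; the log path comes from the script itself.

# Run the HAProxy check script manually and inspect its exit code and log.
/etc/keepalived/check_haproxy.sh; echo "exit code: $?"
tail -n 5 /var/log/keepalived-haproxy-state.log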
4. Verify the virtual IP bound by Keepalived

Next, check whether Keepalived has bound the virtual IP on each of the two servers.

binghe151:

[root@binghe151 ~]# ip addr
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:10:a1:45 brd ff:ff:ff:ff:ff:ff
    inet 192.168.175.151/24 brd 192.168.175.255 scope global eth0
    inet 192.168.175.110/32 scope global eth0
    inet 192.168.175.110/24 brd 192.168.175.255 scope global secondary eth0:1
    inet6 fe80::20c:29ff:fe10:a145/64 scope link
       valid_lft forever preferred_lft forever

Note the line:

inet 192.168.175.110/32 scope global eth0

It shows that Keepalived on binghe151 has bound the virtual IP 192.168.175.110.

binghe154:

[root@binghe154 ~]# ip addr
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:22:2a:75 brd ff:ff:ff:ff:ff:ff
    inet 192.168.175.154/24 brd 192.168.175.255 scope global eth0
    inet 192.168.175.110/24 brd 192.168.175.255 scope global secondary eth0:1
    inet6 fe80::250:56ff:fe22:2a75/64 scope link
       valid_lft forever preferred_lft forever

Keepalived on binghe154 has not bound the virtual IP.

5. Test the failover of the virtual IP

How do we test that the virtual IP fails over? First stop Keepalived on binghe151:

/etc/init.d/keepalived stop

Then check the virtual IP binding on binghe154:

[root@binghe154 ~]# ip addr
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:22:2a:75 brd ff:ff:ff:ff:ff:ff
    inet 192.168.175.154/24 brd 192.168.175.255 scope global eth0
    inet 192.168.175.110/32 scope global eth0
    inet 192.168.175.110/24 brd 192.168.175.255 scope global secondary eth0:1
    inet6 fe80::250:56ff:fe22:2a75/64 scope link
       valid_lft forever preferred_lft forever

The output now contains the line:

inet 192.168.175.110/32 scope global eth0

So Keepalived on binghe154 has bound the virtual IP 192.168.175.110: the VIP has floated over to binghe154.

6. Keepalived on binghe151 reclaims the virtual IP

Now start Keepalived on binghe151 again:

/etc/init.d/keepalived start

After it has started, check the virtual IP binding once more.

binghe151:

[root@binghe151 ~]# ip addr
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:10:a1:45 brd ff:ff:ff:ff:ff:ff
    inet 192.168.175.151/24 brd 192.168.175.255 scope global eth0
    inet 192.168.175.110/32 scope global eth0
    inet 192.168.175.110/24 brd 192.168.175.255 scope global secondary eth0:1
    inet6 fe80::20c:29ff:fe10:a145/64 scope link
       valid_lft forever preferred_lft forever

binghe154:

[root@binghe154 ~]# ip addr
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:22:2a:75 brd ff:ff:ff:ff:ff:ff
    inet 192.168.175.154/24 brd 192.168.175.255 scope global eth0
    inet 192.168.175.110/24 brd 192.168.175.255 scope global secondary eth0:1
    inet6 fe80::250:56ff:fe22:2a75/64 scope link
       valid_lft forever preferred_lft forever

Because the Keepalived on binghe151 is configured with a higher priority than the one on binghe154, once it is started again it preempts the virtual IP and takes it back from binghe154.
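During failover tests like this one, it is handy to ask both nodes which of them currently holds the VIP instead of reading the full ip addr output. The small helper loop below is my own addition, assuming password-less SSH from an admin host to both Mycat servers.

# Show which node currently holds the Keepalived-managed VIP.
for host in binghe151 binghe154; do
  echo -n "${host}: "
  ssh ${host} "ip addr show eth0 | grep -q '192.168.175.110/32' && echo 'holds the VIP' || echo '-'"
done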
Configure MySQL master-slave replication

To keep things simple, I configure master-slave replication only between the MySQL instances on binghe154 (master) and binghe155 (slave); you can set up replication for the MySQL instances on the other servers in the same way (note that what I configure here is a single master with a single slave).

1. Edit my.cnf

binghe154:

server_id = 154
log_bin = /data/mysql/log/bin_log/mysql-bin
binlog-ignore-db = mysql
binlog_format = mixed
sync_binlog = 100
log_slave_updates = 1
binlog_cache_size = 32m
max_binlog_cache_size = 64m
max_binlog_size = 512m
lower_case_table_names = 1
relay_log = /data/mysql/log/bin_log/relay-bin
relay_log_index = /data/mysql/log/bin_log/relay-bin.index
master_info_repository = TABLE
relay-log-info-repository = TABLE
relay-log-recovery

binghe155:

server_id = 155
log_bin = /data/mysql/log/bin_log/mysql-bin
binlog-ignore-db = mysql
binlog_format = mixed
sync_binlog = 100
log_slave_updates = 1
binlog_cache_size = 32m
max_binlog_cache_size = 64m
max_binlog_size = 512m
lower_case_table_names = 1
relay_log = /data/mysql/log/bin_log/relay-bin
relay_log_index = /data/mysql/log/bin_log/relay-bin.index
master_info_repository = TABLE
relay-log-info-repository = TABLE
relay-log-recovery

2. Synchronize the existing data between the two MySQL servers

The MySQL on binghe154 contains only a customer_db database. Export it with mysqldump:

[root@binghe154 ~]# mysqldump --master-data=2 --single-transaction -uroot -p --databases customer_db > binghe154.sql
Enter password:

Then look at the binghe154.sql file:

more binghe154.sql

In the file you will find a line like this:

CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000042', MASTER_LOG_POS=995;

It tells us that the current binary log file is mysql-bin.000042 and the binary log position is 995.

Copy binghe154.sql to binghe155:

scp binghe154.sql 192.168.175.155:/usr/local/src

On binghe155, import the script into MySQL:

mysql -uroot -p < /usr/local/src/binghe154.sql

With that, the initial data is in place on the slave.

3. Create the replication account

In the MySQL on binghe154, create the account used for master-slave replication:

mysql> CREATE USER 'repl'@'192.168.175.%' IDENTIFIED BY 'repl123456';
Query OK, 0 rows affected (0.01 sec)

mysql> ALTER USER 'repl'@'192.168.175.%' IDENTIFIED WITH mysql_native_password BY 'repl123456';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'192.168.175.%';
Query OK, 0 rows affected (0.00 sec)

mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

4. Configure the replication link

Log in to the MySQL on binghe155 and configure the replication link:

mysql> change master to master_host='192.168.175.154', master_port=3306, master_user='repl', master_password='repl123456', MASTER_LOG_FILE='mysql-bin.000042', MASTER_LOG_POS=995;

The values of MASTER_LOG_FILE and MASTER_LOG_POS are exactly the ones found in the binghe154.sql file.

5. Start the slave

Start the slave on the MySQL command line on binghe155:

mysql> start slave;

Then check whether the slave started successfully:

mysql> SHOW SLAVE STATUS \G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                          ...
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
#################  part of the output omitted  ##################

Both Slave_IO_Running and Slave_SQL_Running are Yes, so the MySQL master-slave replication environment was set up successfully.
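As a follow-up, the replication health fields can be pulled out of SHOW SLAVE STATUS with a one-liner on binghe155, which is convenient for scripting or monitoring. This is an optional check of my own, not part of the original procedure.

# Extract the replication health fields on the slave (binghe155).
mysql -uroot -p -e "SHOW SLAVE STATUS\G" | grep -E "Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master"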
Finally, do not forget to also create the account that Mycat uses to connect to MySQL in the MySQL on binghe155:

CREATE USER 'mycat'@'192.168.175.%' IDENTIFIED BY 'mycat';
ALTER USER 'mycat'@'192.168.175.%' IDENTIFIED WITH mysql_native_password BY 'mycat';
GRANT SELECT, INSERT, UPDATE, DELETE, EXECUTE ON *.* TO 'mycat'@'192.168.175.%';
FLUSH PRIVILEGES;

Configure read/write splitting in Mycat

Modify Mycat's schema.xml so that the MySQL instances on binghe154 and binghe155 are used with read/write splitting. Edit the schema.xml file under the conf/zkconf directory of the Mycat installation; the dataHost definitions keep their select user() heartbeats as before, and in essence the writeHost of the binghe154 dataHost now nests a readHost for the slave at 192.168.175.155:3306, so that writes go to binghe154 while reads can be served by binghe155.

Save the file and quit vim, then re-initialize the data in Zookeeper:

/usr/local/mycat/bin/init_zk_data.sh

Once the command succeeds, the configuration is automatically synchronized to the schema.xml under the conf directory of the Mycat installations on binghe151 and binghe154. Then restart the Mycat service on binghe151 and binghe154:

mycat restart

How to access the high-availability environment

At this point the whole high-availability environment is configured. When the upper-layer application connects to it, it connects to the IP and port that HAProxy listens on. For example, with the mysql command:

[root@binghe151 ~]# mysql -umycat -pmycat -h192.168.175.110 -P3366 --default-auth=mysql_native_password
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.6.29-mycat-1.6.7.4-release-20200228205020 MyCat Server (OpenCloudDB)

Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+----------+
| DATABASE |
+----------+
| shop     |
+----------+
1 row in set (0.10 sec)

mysql> use shop;
Database changed
mysql> show tables;
+-----------------------+
| Tables in shop        |
+-----------------------+
| customer_balance_log  |
| customer_inf          |
| customer_level_inf    |
| customer_login        |
| customer_login_log    |
| customer_point_log    |
| order_cart            |
| order_customer_addr   |
| order_detail          |
| order_master          |
| product_brand_info    |
| product_category      |
| product_comment       |
| product_info          |
| product_pic_info      |
| product_supplier_info |
| region_info           |
| serial                |
| shipping_info         |
| warehouse_info        |
| warehouse_proudct     |
+-----------------------+
21 rows in set (0.00 sec)

Here I only extended read/write splitting to the MySQL on binghe154; according to your own situation you can likewise configure master-slave replication and read/write splitting for the MySQL instances on the other servers. With that, the whole environment provides high availability for HAProxy, Mycat, MySQL, Zookeeper and Keepalived.

That is it for today. I am Binghe. If you have any questions, leave a comment below, or add me on WeChat so we can talk tech and level up together~~