The first step is to set up four machines: one as the load balancer and three others as cluster nodes:
- hap-db
- db1
- db2
- db3
Next, you need to set up the network and hostnames, either via DHCP/DNS or in the hosts files, like this (adjust the IPs for your network):
192.168.0.20 hap-db hap-db.network.lan
192.168.0.21 db1 db1.network.lan
192.168.0.22 db2 db2.network.lan
192.168.0.23 db3 db3.network.lan
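Before going further, it's worth doing a quick connectivity check; for example, from hap-db (a minimal sketch, adjust the hostnames to yours):
ping -c 2 db1
ping -c 2 db2
ping -c 2 db3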
Once you have tested it and made sure the network is up and running, you have to install MariaDB on the three cluster nodes.
Go to https://downloads.mariadb.org/mariadb/repositories to select the right version for your system; mine is CentOS 7 x64 and I've selected MariaDB 10.1 Stable.
So I need to create the MariaDB.repo file in order to fetch the right version:
touch /etc/yum.repos.d/MariaDB.repo
Using your favorite editor, fill this file with the repository entry you got from the MariaDB version selector:
# MariaDB 10.1 CentOS repository list - created 2016-10-19 09:56 UTC
# http://downloads.mariadb.org/mariadb/repositories/
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.1/centos7-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
# Don't forget this one otherwise you will get an error ([Errno 14])
sslverify=false
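Optionally, you can make yum refresh its metadata and confirm the new repository is picked up (plain yum commands, nothing MariaDB-specific):
yum clean all
yum repolist | grep -i mariadb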
Then, you can install MariaDB on the three cluster nodes:
sudo yum install MariaDB-server MariaDB-Galera-server MariaDB-client galera rsync percona-toolkit percona-xtrabackup -y
Disable SELinux on all four nodes (Galera and HAProxy will not start otherwise):
sed -i.bak 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
setenforce 0
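You can confirm that SELinux is no longer enforcing (it should report Permissive now, and Disabled after a reboot):
getenforce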
On all three nodes:
- start mysql
- run mysql_secure_installation
- stop mysql
The mysql database is not replicated via Galera, so this needs to be done on each node before we set up Galera. Otherwise, errors will occur.
systemctl start mysql
mysql_secure_installation
systemctl stop mysql
On the three nodes, make a backup of the existing server.cnf (in /etc/my.cnf.d/) and create a new one:
cd /etc/my.cnf.d/
mv server.cnf server.cnf.bak
touch server.cnf
In the new server.cnf, adjust these parameters to match your cluster nodes' IPs:
- wsrep_cluster_name = whatever you want, but the same on all three
- wsrep_cluster_address = change the IPs to match the IPs you're using
- wsrep_node_address = IP address of node
- wsrep_node_incoming_address = IP address of node
The file looks like this
#
# These groups are read by MariaDB server.
# Use it for options that only the server (but not clients) should see
#
# See the examples of server my.cnf files in /usr/share/mysql/
#
# this is read by the standalone daemon and embedded servers
[server]
# this is only for the mysqld standalone daemon
[mysqld]
#
# * Galera-related settings
#
[galera]
wsrep_on=ON
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1
query_cache_size=0
query_cache_type=0
bind-address=0.0.0.0
datadir=/var/lib/mysql
innodb_log_file_size=100M
innodb_file_per_table
innodb_flush_log_at_trx_commit=2
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address="gcomm://192.168.0.21,192.168.0.22,192.168.0.23"
wsrep_cluster_name='galera_cluster'
wsrep_node_address='192.168.0.21'
wsrep_node_name='db1'
wsrep_sst_method=rsync
wsrep_sst_auth=root:password
# this is only for embedded server
[embedded]
# This group is only read by MariaDB servers, not by MySQL.
# If you use the same .cnf file for MySQL and MariaDB,
# you can put MariaDB-only options here
[mariadb]
# This group is only read by MariaDB-10.1 servers.
# If you use the same .cnf file for MariaDB of different versions,
# use this group for options that older servers don't understand
[mariadb-10.1]
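The example above is for db1; on db2 and db3 only the node-specific lines change. For instance, on db2 it would look like this (based on the IPs used in this guide, adjust for yours):
wsrep_node_address='192.168.0.22'
wsrep_node_name='db2'
and the same pattern on db3 with 192.168.0.23.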
Then, on the first node (db1) only, launch this command in order to bootstrap the cluster:
galera_new_cluster
Check the status by launching this command:
mysql -u root -p -e "show status like 'wsrep%'"
You should see this:
wsrep_cluster_size 1
wsrep_cluster_status Primary
wsrep_evs_state OPERATIONAL
wsrep_incoming_addresses 192.168.0.21:3306
wsrep_local_state_comment Synced
wsrep_ready ON
Start MySQL on the other nodes:
systemctl start mysql
And you should now see this:
wsrep_cluster_size 3
wsrep_cluster_status Primary
wsrep_evs_state OPERATIONAL
wsrep_incoming_addresses 192.168.0.21:3306,192.168.0.22:3306,192.168.0.23:3306
wsrep_local_state_comment Synced
wsrep_ready ON
To test the cluster, list the databases on one of the three nodes, for example db3:
mysql -u root -p
MariaDB [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
+--------------------+
3 rows in set (0.00 sec)
On db1, create a new database with some values inside:
mysql -u root -p
MariaDB [(none)]> create database testing;
Query OK, 1 row affected (0.07 sec)
MariaDB [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| testing |
| information_schema |
| mysql |
| performance_schema |
+--------------------+
4 rows in set (0.00 sec)
MariaDB [(none)]> use testing;
Database changed
MariaDB [testing]> create table test(message text);
Query OK, 0 rows affected (0.31 sec)
MariaDB [testing]> insert into test values('IrisTest!');
Query OK, 1 row affected (0.03 sec)
Then go back to db3 to see if the replication is working:
MariaDB [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| testing |
| information_schema |
| mysql |
| performance_schema |
+--------------------+
4 rows in set (0.00 sec)
MariaDB [(none)]> use testing;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
MariaDB [testing]> select * from test;
+----------------+
| message |
+----------------+
| IrisTest! |
+----------------+
1 row in set (0.00 sec)
Now you can delete the testing database
MariaDB [testing]> drop database testing;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
+--------------------+
3 rows in set (0.00 sec)
MariaDB [(none)]>
The database should disappear from all nodes.
Now we have to install a script to check the cluster health. This script returns HTTP 200 if the node is up and synced, or 503 if it is not. The script is located at https://github.com/olafz/percona-clustercheck. Get it and put it in an executable path on each cluster node:
wget https://raw.githubusercontent.com/olafz/percona-clustercheck/master/clustercheck
chmod 775 clustercheck
mv clustercheck /usr/bin/
ls -al /usr/bin/ | grep clustercheck
Then we add the clustercheck user to the cluster nodes. By default, clustercheck connects with the user clustercheckuser and the password clustercheckpassword!. This is configurable.
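If you prefer different credentials, the script accepts the user and password as its first two arguments (see the script's README for the full argument list); the names below are only placeholders:
/usr/bin/clustercheck someuser somepassword
In that case, remember to pass the same arguments in the xinetd config below (via a server_args line).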
On the db1 node:
mysql -u root -p
MariaDB [(none)]> GRANT PROCESS ON *.* TO 'clustercheckuser'@'localhost' IDENTIFIED BY 'clustercheckpassword!' ;
Query OK, 0 rows affected (0.00 sec)
Now we check on the other nodes:
mysql -u root -p
MariaDB [(none)]> select user, host from mysql.user where user = 'clustercheckuser';
+------------------+-----------+
| user | host |
+------------------+-----------+
| clustercheckuser | localhost |
+------------------+-----------+
1 row in set (0.00 sec)
Now we have to install xinetd on each node:
yum install xinetd -y
And on each node we have to create a config file for xinetd, /etc/xinetd.d/mysqlchk:
# default: on
# description: mysqlchk
service mysqlchk
{
    disable = no
    flags = REUSE
    socket_type = stream
    port = 9200
    wait = no
    user = nobody
    server = /usr/bin/clustercheck
    log_on_failure += USERID
    only_from = 0.0.0.0/0
    per_source = UNLIMITED
}
We need to modify /etc/services on each cluster node in order to comment out the services using port 9200 and replace them with clustercheck (beware, the file is really long; search for port number 9200):
sun-as-jpda 9191/tcp # Sun AppSvr JPDA
sun-as-jpda 9191/udp # Sun AppSvr JPDA
mysqlchk 9200/tcp # Galera Clustercheck
#wap-wsp 9200/tcp # WAP connectionless session service
#wap-wsp 9200/udp # WAP connectionless session service
wap-wsp-wtp 9201/tcp # WAP session service
wap-wsp-wtp 9201/udp # WAP session service
Then we start xinetd on each cluster node:
systemctl enable xinetd
service xinetd start
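A quick way to verify xinetd is listening on each node (ss is part of iproute, installed by default on CentOS 7):
ss -tlnp | grep 9200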
We can now run /usr/bin/clustercheck on each node to verify that the cluster is up and synced; you should get this result:
$ /usr/bin/clustercheck
HTTP/1.1 200 OK
Content-Type: text/plain
Connection: close
Content-Length: 40
Percona XtraDB Cluster Node is synced.
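The same check should also work remotely, which is exactly what HAProxy will rely on. For example, from the hap-db node (assuming curl is installed):
curl http://192.168.0.21:9200
This should print the same "Percona XtraDB Cluster Node is synced." message.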
Now we can go back to our load balancer node (hap-db) and install HAProxy:
yum install haproxy -y
systemctl enable haproxy
Now we edit the config file /etc/haproxy/haproxy.cfg in order to set up the load balancing between our cluster nodes:
global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    maxconn 1024
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode http
    option tcplog
    option dontlognull
    retries 3
    option redispatch
    maxconn 1024
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

listen mariadb_cluster_writes 0.0.0.0:3306
    ## A failover pool for writes to ensure writes only hit one node at a time.
    mode tcp
    option httpchk
    server galera-node01 192.168.0.21:3306 check port 9200
    server galera-node02 192.168.0.22:3306 check port 9200 backup
    server galera-node03 192.168.0.23:3306 check port 9200 backup

listen mariadb_cluster_reads 0.0.0.0:3307
    ## A load-balanced pool for reads to utilize all nodes for reads.
    mode tcp
    balance leastconn
    option httpchk
    server galera-node01 192.168.0.21:3306 check port 9200
    server galera-node02 192.168.0.22:3306 check port 9200
    server galera-node03 192.168.0.23:3306 check port 9200

listen stats 0.0.0.0:9000
    ## HAProxy stats web gui.
    mode http
    stats enable
    stats uri /haproxy_stats
    stats realm HAProxy\ Statistics
    stats auth haproxy:haproxy
    stats admin if TRUE
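Before starting HAProxy, it doesn't hurt to validate the configuration syntax:
haproxy -c -f /etc/haproxy/haproxy.cfg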
We can now start and check the load balancer:
systemctl start haproxy
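The stats page defined above should now answer on port 9000; for example (using the hap-db IP and the haproxy:haproxy credentials from the config):
curl -u haproxy:haproxy http://192.168.0.20:9000/haproxy_stats
or simply open http://192.168.0.20:9000/haproxy_stats in a browser; the three galera-node entries should show up as UP.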
Now we can make our first MySQL queries to the cluster. First, allow root to connect from anywhere by executing this on one of the three nodes:
mysql> CREATE USER 'root'@'%' IDENTIFIED BY '???????';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
mysql> FLUSH PRIVILEGES;
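Then, from any machine that can reach the load balancer, you can query the cluster through HAProxy (a quick sketch, using the hap-db IP and the ports defined in the config above):
mysql -h 192.168.0.20 -P 3306 -u root -p -e "show status like 'wsrep_cluster_size'"
mysql -h 192.168.0.20 -P 3307 -u root -p -e "show status like 'wsrep_cluster_size'"
The first goes through the writes pool, the second through the reads pool.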
Now it's good!
Credits:
- https://downloads.mariadb.org/mariadb/repositories
- https://www.trustica.cz/2016/04/21/fun-with-mariadb-galera-cluster/
- https://mariadb.com/blog/upgrading-mariadb-galera-cluster-55-100
- https://mariadb.com/blog/setting-mariadb-enterprise-cluster-part-1-how-set-virtual-machine-template
- https://mariadb.com/blog/setting-mariadb-enterprise-cluster-part-2-how-set-mariadb-cluster
- https://mariadb.com/blog/setup-mariadb-enterprise-cluster-part-3-setup-ha-proxy-load-balancer-read-and-write-pools
- https://github.com/olafz/percona-clustercheck
- http://stackoverflow.com/questions/1559955/host-xxx-xx-xxx-xxx-is-not-allowed-to-connect-to-this-mysql-server