2-zone, 4-server Kazoo install
Host OS Linux distribution: AlmaLinux 9
Server1 and Server3: CouchDB, Kamailio, RabbitMQ, Kazoo, HAProxy
Server2 and Server4: CouchDB, FreeSWITCH
Each service runs inside its own Linux container, launched on its respective host as described above.
Nebula nodes (with server1 as lighthouse) interconnect the hosts to provide the internal LAN and DNS.
On all servers:
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
dnf install -y epel-release yum-utils
dnf install -y glibc-all-langpacks langpacks-en
localectl set-locale LANG=en_US.UTF-8
dnf install -y snapd
systemctl enable --now snapd
snap install lxd
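Note: the SELinux change in /etc/selinux/config only takes effect at the next boot; to also disable it for the running session you can run:
setenforce 0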
On Server1:
[root@almalinux-s-1vcpu-2gb-nyc1-01 ~]# lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: yes
What IP address or DNS name should be used to reach this server? [default=178.128.153.235]:
Are you joining an existing cluster? (yes/no) [default=no]: no
What member name should be used to identify this server in the cluster? [default=almalinux-s-1vcpu-2gb-nyc1-01]: server1
Do you want to configure a new local storage pool? (yes/no) [default=yes]: yes
Name of the storage backend to use (btrfs, dir, lvm) [default=btrfs]: dir
Do you want to configure a new remote storage pool? (yes/no) [default=no]: no
Would you like to connect to a MAAS server? (yes/no) [default=no]: no
Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]: no
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: no
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: no
[root@almalinux-s-1vcpu-2gb-nyc1-01 ~]#
On Server1:
lxc cluster add server2
Member server2 join token:
<TOKEN>
On Server 2:
[root@almalinux-s-1vcpu-2gb-nyc1-02 ~]# lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: yes
What IP address or DNS name should be used to reach this server? [default=178.128.150.82]:
Are you joining an existing cluster? (yes/no) [default=no]: yes
Do you have a join token? (yes/no/[token]) [default=no]: <TOKEN>
All existing data is lost when joining a cluster, continue? (yes/no) [default=no] yes
Choose "source" property for storage pool "local":
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: no
[root@almalinux-s-1vcpu-2gb-nyc1-02 ~]#
On Server 1 (or 2):
lxc cluster add server3
Member server3 join token:
<TOKEN>
On Server 3:
[root@almalinux-s-1vcpu-2gb-fra1-01 ~]# lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: yes
What IP address or DNS name should be used to reach this server? [default=164.92.195.7]:
Are you joining an existing cluster? (yes/no) [default=no]: yes
Do you have a join token? (yes/no/[token]) [default=no]: <TOKEN>
All existing data is lost when joining a cluster, continue? (yes/no) [default=no] yes
Choose "source" property for storage pool "local":
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: no
On Server1 or any previously joined server:
lxc cluster add server4
Member server4 join token:
<TOKEN>
On server 4:
[root@almalinux-s-1vcpu-2gb-fra1-02 ~]# lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: yes
What IP address or DNS name should be used to reach this server? [default=164.92.199.159]:
Are you joining an existing cluster? (yes/no) [default=no]: yes
Do you have a join token? (yes/no/[token]) [default=no]: <TOKEN>
All existing data is lost when joining a cluster, continue? (yes/no) [default=no] yes
Choose "source" property for storage pool "local":
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: no
On any of the servers joined to the cluster:
# lxc cluster list
+---------+------------------------------+------------------+--------------+----------------+-------------+--------+-------------------+
| NAME | URL | ROLES | ARCHITECTURE | FAILURE DOMAIN | DESCRIPTION | STATE | MESSAGE |
+---------+------------------------------+------------------+--------------+----------------+-------------+--------+-------------------+
| server1 | https://178.ZZZ.ZZZ.235:8443 | database-leader | x86_64 | default | | ONLINE | Fully operational |
| | | database | | | | | |
+---------+------------------------------+------------------+--------------+----------------+-------------+--------+-------------------+
| server2 | https://178.ZZZ.ZZX.82:8443 | database | x86_64 | default | | ONLINE | Fully operational |
+---------+------------------------------+------------------+--------------+----------------+-------------+--------+-------------------+
| server3 | https://164.ZZZ.ZZ.7:8443 | database | x86_64 | default | | ONLINE | Fully operational |
+---------+------------------------------+------------------+--------------+----------------+-------------+--------+-------------------+
| server4 | https://164.ZZ.ZZZ.159:8443 | database-standby | x86_64 | default | | ONLINE | Fully operational |
+---------+------------------------------+------------------+--------------+----------------+-------------+--------+-------------------+
[root@almalinux-s-1vcpu-2gb-nyc1-01 ~]#
Create the network, targeting each cluster member in turn:
[root@almalinux-s-1vcpu-2gb-nyc1-01 ~]# lxc network create --target server1 lxdbr0
Network lxdbr0 pending on member server1
[root@almalinux-s-1vcpu-2gb-nyc1-01 ~]# lxc network create --target server2 lxdbr0
Network lxdbr0 pending on member server2
[root@almalinux-s-1vcpu-2gb-nyc1-01 ~]# lxc network create --target server3 lxdbr0
Network lxdbr0 pending on member server3
[root@almalinux-s-1vcpu-2gb-nyc1-01 ~]# lxc network create --target server4 lxdbr0
Network lxdbr0 pending on member server4
[root@almalinux-s-1vcpu-2gb-nyc1-01 ~]#
At this stage, the network is not fully created; it is in the pending state:
# lxc network list
+--------+----------+---------+------+------+-------------+---------+---------+
| NAME | TYPE | MANAGED | IPV4 | IPV6 | DESCRIPTION | USED BY | STATE |
+--------+----------+---------+------+------+-------------+---------+---------+
| eth0 | physical | NO | | | | 0 | |
+--------+----------+---------+------+------+-------------+---------+---------+
| eth1 | physical | NO | | | | 0 | |
+--------+----------+---------+------+------+-------------+---------+---------+
| lxdbr0 | bridge | YES | | | | 0 | PENDING |
+--------+----------+---------+------+------+-------------+---------+---------+
Run the following command to instantiate the network on all cluster members:
lxc network create lxdbr0
Network lxdbr0 created
# lxc network list
+--------+----------+---------+---------------+---------------------------+-------------+---------+---------+
| NAME | TYPE | MANAGED | IPV4 | IPV6 | DESCRIPTION | USED BY | STATE |
+--------+----------+---------+---------------+---------------------------+-------------+---------+---------+
| eth0 | physical | NO | | | | 0 | |
+--------+----------+---------+---------------+---------------------------+-------------+---------+---------+
| eth1 | physical | NO | | | | 0 | |
+--------+----------+---------+---------------+---------------------------+-------------+---------+---------+
| lxdbr0 | bridge | YES | 10.17.96.1/24 | fd42:27e4:85ba:bba0::1/64 | | 0 | CREATED |
+--------+----------+---------+---------------+---------------------------+-------------+---------+---------+
Then, optionally, disable the IPv6 address on the bridge:
# lxc network set lxdbr0 ipv6.address=none
Let's create the container instance for CouchDB on server1:
lxc init images:almalinux/8 couch1 --target server1
Creating couch1
The instance you are starting doesn't have any network attached to it.
To create a new network, use: lxc network create
To attach a network to an instance, use: lxc network attach
We need to attach lxdbr0 to it:
lxc network attach lxdbr0 couch1 eth0 eth0
Then start the container
lxc start couch1
Launch a shell into it:
lxc shell couch1
Next we install CouchDB (version 2 is used here):
dnf install -y yum-utils
yum-config-manager --add-repo https://couchdb.apache.org/repo/couchdb.repo
dnf install -y couchdb
dnf install -y git
git clone --depth 1 https://github.com/2600hz/kazoo-configs-couchdb /etc/kazoo
cd /etc/kazoo/couchdb
Generate a random Erlang cookie:
export RANDOMJUNK=`head -c 32 /dev/urandom`
export SEED=`date +%s`
export COOKIE=`echo $RANDOMJUNK $SEED | sha256sum | base64 | head -c 32`
echo $COOKIE
Edit the cookie in vm.args with the value from the output (every CouchDB node must share the same Erlang cookie):
nano -w vm.args
-setcookie ${COOKIE}
Edit local.ini: add an admin user with a password, and set the cluster values (q = shards per database, n = replicas, r/w = read/write quorum) as follows:
nano -w local.ini
[admins]
admin = Y0urPa$$w0rd
[cluster]
q=3
r=2
w=2
n=3
Edit /etc/hosts and remove the first line, which reads:
127.0.1.1 couch1
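If you prefer a one-liner over editing by hand (a sketch assuming the entry begins exactly with 127.0.1.1):
sed -i '/^127.0.1.1/d' /etc/hosts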
Go to /etc/kazoo and copy the binaries to /usr/sbin and the systemd unit to /lib/systemd/system:
# cp -v system/sbin/kazoo-* /usr/sbin/
'system/sbin/kazoo-couchdb' -> '/usr/sbin/kazoo-couchdb'
'system/sbin/kazoo-run-couchdb' -> '/usr/sbin/kazoo-run-couchdb'
# cp system/systemd/kazoo-couchdb.service /lib/systemd/system/
Start the kazoo-couchdb service:
# systemctl enable --now kazoo-couchdb
Created symlink /etc/systemd/system/multi-user.target.wants/kazoo-couchdb.service → /usr/lib/systemd/system/kazoo-couchdb.service.
Log out of the couch1 container and publish it as an image, to be copied to the other servers as the remaining CouchDB containers:
[root@almalinux-s-1vcpu-2gb-nyc1-01 ~]# lxc publish --alias couchdb couch1
Instance published with fingerprint: 800b967888908bd524298a52a9a838732db463699010590fdc3acf25bdc2dec2
[root@almalinux-s-1vcpu-2gb-nyc1-01 ~]# lxc init couchdb couch2 --target server2
Creating couch2
The instance you are starting doesn't have any network attached to it.
To create a new network, use: lxc network create
To attach a network to an instance, use: lxc network attach
[root@almalinux-s-1vcpu-2gb-nyc1-01 ~]# lxc network attach lxdbr0 couch2 eth0 eth0
Repeat the same for server3:
[root@almalinux-s-1vcpu-2gb-nyc1-01 ~]# lxc init couchdb couch3 --target server3
Creating couch3
The instance you are starting doesn't have any network attached to it.
To create a new network, use: lxc network create
To attach a network to an instance, use: lxc network attach
[root@almalinux-s-1vcpu-2gb-nyc1-01 ~]# lxc network attach lxdbr0 couch3 eth0 eth0
And again for server4
[root@almalinux-s-1vcpu-2gb-nyc1-01 ~]# lxc init couchdb couch4 --target server4
Creating couch4
The instance you are starting doesn't have any network attached to it.
To create a new network, use: lxc network create
To attach a network to an instance, use: lxc network attach
[root@almalinux-s-1vcpu-2gb-nyc1-01 ~]# lxc network attach lxdbr0 couch4 eth0 eth0
Start the containers again:
[root@almalinux-s-1vcpu-2gb-nyc1-01 ~]# lxc start couch1 couch2 couch3 couch4
Assign distinct IP addresses to the couch2 through couch4 containers:
[root@almalinux-s-1vcpu-2gb-nyc1-01 ~]# lxc config device set couch2 eth0 ipv4.address=10.17.96.45
[root@almalinux-s-1vcpu-2gb-nyc1-01 ~]# lxc config device set couch3 eth0 ipv4.address=10.17.96.46
[root@almalinux-s-1vcpu-2gb-nyc1-01 ~]# lxc config device set couch4 eth0 ipv4.address=10.17.96.47
[root@almalinux-s-1vcpu-2gb-nyc1-01 ~]# lxc restart couch2 couch3 couch4
[root@almalinux-s-1vcpu-2gb-nyc1-01 ~]# lxc list
+--------+---------+--------------------+------+-----------+-----------+----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | LOCATION |
+--------+---------+--------------------+------+-----------+-----------+----------+
| couch1 | RUNNING | 10.17.96.44 (eth0) | | CONTAINER | 0 | server1 |
+--------+---------+--------------------+------+-----------+-----------+----------+
| couch2 | RUNNING | 10.17.96.45 (eth0) | | CONTAINER | 0 | server2 |
+--------+---------+--------------------+------+-----------+-----------+----------+
| couch3 | RUNNING | 10.17.96.46 (eth0) | | CONTAINER | 0 | server3 |
+--------+---------+--------------------+------+-----------+-----------+----------+
| couch4 | RUNNING | 10.17.96.47 (eth0) | | CONTAINER | 0 | server4 |
+--------+---------+--------------------+------+-----------+-----------+----------+
Now we set the containers' hostnames to FQDN format:
[root@almalinux-s-1vcpu-2gb-nyc1-01 ~]# lxc shell couch1
Last login: Thu May 4 23:09:38 UTC 2023 on pts/1
[root@couch1 ~]# hostnamectl set-hostname couch1.hpbx.tel
[root@couch1 ~]# systemctl restart kazoo-couchdb
[root@couch1 ~]# hostname -f
couch1.hpbx.tel
[root@couch1 ~]# hostname
couch1.hpbx.tel
[root@couch1 ~]# exit
Now repeat the same procedure for couch2, couch3 and couch4 (replace hpbx.tel with your preferred internal domain); see the loop sketch below.
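A minimal loop from the host that does this for the remaining nodes (assumes the containers are running and uses the kazoo-couchdb service name from above):
for c in couch2 couch3 couch4; do
  lxc exec $c -- hostnamectl set-hostname $c.hpbx.tel
  lxc exec $c -- systemctl restart kazoo-couchdb
done
Next, create the FreeSWITCH container for zone 100 on server2: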
[root@almalinux-s-1vcpu-2gb-nyc1-01 ~]# lxc init images:almalinux/8 fs1-z100 --target server2
[root@almalinux-s-1vcpu-2gb-nyc1-01 ~]# lxc network attach lxdbr0 fs1-z100 eth0 eth0
[root@almalinux-s-1vcpu-2gb-nyc1-01 ~]# lxc config set fs1-z100 security.privileged=true
[root@almalinux-s-1vcpu-2gb-nyc1-01 ~]# lxc start fs1-z100
Now we enter the container's shell to install FreeSWITCH:
# lxc shell fs1-z100
rpm -ivh http://repo.okay.com.mx/centos/8/x86_64/release/okay-release-1-5.el8.noarch.rpm
# sed -i 's/^failovermethod=.*/#failovermethod=priority/g' /etc/yum.repos.d/okay.repo
dnf -y install epel-release
dnf install -y freeswitch-config-vanilla
dnf install -y freeswitch-format-mod-shout.x86_64
dnf install -y freeswitch-event-kazoo.x86_64
Clone kazoo-configs-freeswitch to /etc/kazoo:
dnf install -y git
git clone --depth 1 -b 4.3 https://github.com/2600hz/kazoo-configs-freeswitch /etc/kazoo
Find the container's public IP:
# curl ipinfo.io
Edit the file /etc/kazoo/freeswitch/sip_profiles/sipinterface_1.xml:
Change ext-rtp-ip to the public IP value
<param name="ext-rtp-ip" value="PUBLIC-IP"/>
Change local-network-acl to NOPE (a non-existent ACL, so no network is treated as local):
<param name="local-network-acl" value="NOPE"/>
Uncomment aggressive-nat-detection
<param name="aggressive-nat-detection" value="true"/>
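These edits can also be scripted; a sketch, assuming ext-rtp-ip appears exactly once in the profile (ipinfo.io/ip returns the bare address):
PUBIP=$(curl -s ipinfo.io/ip)
sed -i "s|ext-rtp-ip\" value=\"[^\"]*\"|ext-rtp-ip\" value=\"$PUBIP\"|" /etc/kazoo/freeswitch/sip_profiles/sipinterface_1.xml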
Now we copy the system scripts and binaries to /usr/sbin and /lib/systemd/system/
cd /etc/kazoo/
# cp -v system/sbin/kazoo-freeswitch /usr/sbin/
'system/sbin/kazoo-freeswitch' -> '/usr/sbin/kazoo-freeswitch'
[root@fs1-z100 kazoo]# cp -v system/systemd/kazoo-freeswitch* /lib/systemd/system/
'system/systemd/kazoo-freeswitch-logrotate.service' -> '/lib/systemd/system/kazoo-freeswitch-logrotate.service'
'system/systemd/kazoo-freeswitch-logrotate.timer' -> '/lib/systemd/system/kazoo-freeswitch-logrotate.timer'
'system/systemd/kazoo-freeswitch.service' -> '/lib/systemd/system/kazoo-freeswitch.service'
Change hostname to FQDN:
hostnamectl set-hostname fs1-z100.hpbx.tel
Now we enable the service:
# systemctl enable kazoo-freeswitch
Created symlink /etc/systemd/system/multi-user.target.wants/kazoo-freeswitch.service → /usr/lib/systemd/system/kazoo-freeswitch.service.
Now we log out of the container and publish an image to copy to the other zone (server4):
lxc publish --alias freeswitch fs1-z100
Instance published with fingerprint: 2f1b3ef892ff9afa8991791317cc6c1a45d1f9ee66b53367ddc552e46cceaad9
[root@almalinux-s-1vcpu-2gb-nyc1-01 ~]# lxc init freeswitch fs1-z200 --target server4
Then attach the interface to it, also setting a different IP address:
# lxc network attach lxdbr0 fs1-z200 eth0 eth0
# lxc config device set fs1-z200 eth0 ipv4.address=10.17.96.163
# lxc config set fs1-z200 security.privileged=true
# lxc start fs1-z200
Enter the container, find its public IP, and set ext-rtp-ip in sipinterface_1.xml to it:
# lxc shell fs1-z200
# curl ipinfo.io
<param name="ext-rtp-ip" value="PUBLIC-IP"/>
Set the hostname to FQDN and restart the FreeSWITCH service:
hostnamectl set-hostname fs1-z200.hpbx.tel
systemctl restart kazoo-freeswitch
Create another container, for RabbitMQ (on server1):
# lxc init images:almalinux/8 rabbit1-z100 --target server1
# lxc network attach lxdbr0 rabbit1-z100 eth0 eth0
# lxc start rabbit1-z100
# lxc shell rabbit1-z100
Now we install RabbitMQ for Kazoo:
dnf install -y epel-release
curl -s https://packagecloud.io/install/repositories/rabbitmq/rabbitmq-server/script.rpm.sh | sudo bash
dnf install -y erlang
dnf install -y rabbitmq-server --nobest
dnf install -y git
git clone --depth 1 https://github.com/2600hz/kazoo-configs-rabbitmq /etc/kazoo
cd /etc/kazoo
# cp -v system/sbin/kazoo-rabbitmq /usr/sbin/
'system/sbin/kazoo-rabbitmq' -> '/usr/sbin/kazoo-rabbitmq'
[root@rabbit1-z100 kazoo]# cp -v system/systemd/kazoo-rabbitmq.service /lib/systemd/system/
'system/systemd/kazoo-rabbitmq.service' -> '/lib/systemd/system/kazoo-rabbitmq.service'
[root@rabbit1-z100 kazoo]#
# systemctl enable kazoo-rabbitmq
Created symlink /etc/systemd/system/multi-user.target.wants/kazoo-rabbitmq.service → /usr/lib/systemd/system/kazoo-rabbitmq.service.
[root@rabbit1-z100 kazoo]# hostnamectl set-hostname rabbit1-z100.hpbx.tel
Log out to the host and publish an image to copy to server3:
# lxc stop rabbit1-z100
# lxc publish rabbit1-z100 --alias rabbitmq
Instance published with fingerprint: 7111e8018d3c3119b503e8debf0d15ac6ea2571285d5b21cb64b05e6f83c39c6
# lxc start rabbit1-z100
# lxc init rabbitmq rabbit1-z200 --target server3
# lxc network attach lxdbr0 rabbit1-z200 eth0 eth0
# lxc config device set rabbit1-z200 eth0 ipv4.address=10.17.96.39
# lxc start rabbit1-z200
# lxc exec rabbit1-z200 -- hostnamectl set-hostname rabbit1-z200.hpbx.tel
Now we spin up a container for Kazoo on server1:
lxc init images:almalinux/8 kz1-z100 --target server1
lxc network attach lxdbr0 kz1-z100 eth0 eth0
lxc start kz1-z100
lxc shell kz1-z100
dnf install -y git
Clone a precompiled Kazoo build suitable for AlmaLinux 8 and/or 9:
rmdir /opt
git clone --depth 1 https://github.com/fmateo05/kazoo-bin-release /opt
dnf install -y sox ghostscript \
ImageMagick libtiff-tools libreoffice-writer
# hostnamectl set-hostname kz1-z100.hpbx.tel
Clone the kazoo config files to /etc/kazoo:
git clone --depth 1 https://github.com/2600hz/kazoo-configs-core /etc/kazoo
cd /etc/kazoo/
cp -v system/sbin/kazoo-* /usr/sbin/
'system/sbin/kazoo-applications' -> '/usr/sbin/kazoo-applications'
'system/sbin/kazoo-ecallmgr' -> '/usr/sbin/kazoo-ecallmgr'
cp -v system/systemd/* /lib/systemd/system/
'system/systemd/kazoo-applications.service' -> '/lib/systemd/system/kazoo-applications.service'
'system/systemd/kazoo-ecallmgr.service' -> '/lib/systemd/system/kazoo-ecallmgr.service'
Install haproxy:
dnf install -y haproxy
Clone the config files:
git clone --depth 1 https://github.com/2600hz/kazoo-configs-haproxy
Copy the haproxy config file to /etc/kazoo:
cp -r kazoo-configs-haproxy/haproxy/ /etc/kazoo/
Copy the system script and unit file to /usr/sbin and systemd:
cd kazoo-configs-haproxy
cp -v system/sbin/kazoo-haproxy /usr/sbin/
'system/sbin/kazoo-haproxy' -> '/usr/sbin/kazoo-haproxy'
[root@kz1-z100 kazoo-configs-haproxy]# cp -v system/systemd/kazoo-haproxy.service /lib/systemd/system/
'system/systemd/kazoo-haproxy.service' -> '/lib/systemd/system/kazoo-haproxy.service'
Comment out the Environment=HAPROXY_BIN line in kazoo-haproxy.service:
#Environment=HAPROXY_BIN=/usr/sbin/haproxy-systemd-wrapper
systemctl daemon-reload
Let's now configure internal DNS for the environment:
lxc init images:almalinux/8 dns-1
lxc network attach lxdbr0 dns-1 eth0 eth0
lxc start dns-1
lxc shell dns-1
dnf install -y dnsmasq
Now we edit /etc/hosts and add an entry (IP and FQDN) for every container.
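A sketch of the entries, using the container IPs that appear throughout this install (adjust to your own addresses):
10.17.96.44   couch1.hpbx.tel        couch1
10.17.96.45   couch2.hpbx.tel        couch2
10.17.96.46   couch3.hpbx.tel        couch3
10.17.96.47   couch4.hpbx.tel        couch4
10.17.96.162  fs1-z100.hpbx.tel      fs1-z100
10.17.96.163  fs1-z200.hpbx.tel      fs1-z200
10.17.96.38   rabbit1-z100.hpbx.tel  rabbit1-z100
10.17.96.39   rabbit1-z200.hpbx.tel  rabbit1-z200
10.17.96.97   kz1-z100.hpbx.tel      kz1-z100
10.17.96.98   kz1-z200.hpbx.tel      kz1-z200
10.17.96.187  km1-z100.hpbx.tel      km1-z100
10.17.96.188  km1-z200.hpbx.tel      km1-z200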
Then we create /etc/resolv-custom.conf with the search domain and nameserver:
search hpbx.tel
nameserver 10.17.96.1
and in /etc/dnsmasq.conf we set:
resolv-file=/etc/resolv-custom.conf
and restart dnsmasq: systemctl restart dnsmasq
After editing, we need to point the default nameserver handed out by lxdbr0 at the dns-1 container's IP. This command is executed at the host level:
echo -e 'dhcp-option=6,10.17.96.215\ndomain=hpbx.tel' | lxc network set lxdbr0 raw.dnsmasq -
Now on all host servers we install Nebula using snap:
snap install nebula
Go to /var/snap/nebula/common/certs and create the certificates (CA and nodes):
nebula.cert ca -name "Myorganization, Inc"
The other steps can be done following the documentation:
https://github.com/slackhq/nebula (README)
Repeat the node-certificate creation steps on the other servers. Do not forget to add -subnets '10.17.96.0/24' so the container network is reachable from the other hosts/containers; a sketch follows.
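A sketch of signing one node certificate (with the snap the tool is exposed as nebula.cert, upstream as nebula-cert; the 192.168.100.x/24 overlay addressing is an illustrative choice):
nebula.cert sign -name "server2" -ip "192.168.100.2/24" -subnets "10.17.96.0/24"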
Nebula's static host config must point at the lighthouse (server1 in this install), and each host's unsafe_routes must list the container addresses of the other hosts only (i.e. on server1, add container routes for server2, server3 and server4 but not its own); see the sketch below.
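A minimal config.yml sketch for server2, assuming the illustrative overlay IPs from above (server1 = 192.168.100.1, lighthouse) and showing two of the remote container routes:
static_host_map:
  "192.168.100.1": ["178.128.153.235:4242"]
lighthouse:
  am_lighthouse: false
  hosts:
    - "192.168.100.1"
tun:
  unsafe_routes:
    - route: 10.17.96.44/32   # couch1 on server1
      via: 192.168.100.1
    - route: 10.17.96.46/32   # couch3 on server3
      via: 192.168.100.3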
Also add an iptables rule on each host so the containers are reachable from the whole internal network:
iptables -t nat -A POSTROUTING -s 10.17.96.0/24 -o nebula1 -j ACCEPT
Edit each container's /etc/rc.local and add:
route add -net 10.17.96.0/24 gw 10.17.96.1
Make rc.local executable with chmod +x /etc/rc.local
Log out from the server, then log back in with a tunnel to couch1:
ssh root@<server-1-ip-address> -L 5984:<couch1-ip-address>:5984
Access the URL and choose the cluster setup:
http://localhost:5984/_utils
Add each node by its internal FQDN in each entry.
Go to kz1-z100 and edit haproxy.cfg in the listen section:
listen bigcouch-data 10.17.96.97:15984
  balance roundrobin
  server db1.zone1.hpbx.tel couch1.hpbx.tel:5984 check
  server db2.zone1.hpbx.tel couch2.hpbx.tel:5984 check
  server db3.zone2.hpbx.tel couch3.hpbx.tel:5984 check backup
  server db4.zone2.hpbx.tel couch4.hpbx.tel:5984 check backup
systemctl start kazoo-haproxy
systemctl enable kazoo-haproxy
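A quick check that HAProxy is answering (expects CouchDB's welcome JSON through the proxy):
curl -s http://10.17.96.97:15984/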
Also edit /etc/kazoo/core/config.ini:
[zone]
name = "z100"
amqp_uri = "amqp://guest:[email protected]:5672"
[zone]
name = "z200"
amqp_uri = "amqp://guest:[email protected]:5672"
[bigcouch]
compact_automatically = true
cookie = change_me
ip = "10.17.96.97"
port = 15984
username = "admin"
password = "Y0urPa$$w0rd"
admin_port = 15986
[kazoo_apps]
host = "kz1-z100.hpbx.tel"
zone = "z100"
cookie = COOKIE
[kazoo_apps]
host = "kz1-z200.hpbx.tel"
zone = "z200"
cookie = COOKIE
[ecallmgr]
host = "kz1-z100.hpbx.tel"
zone = "z100"
cookie = COOKIE
[ecallmgr]
host = "kz1-z200.hpbx.tel"
zone = "z200"
cookie = COOKIE
Let's start kazoo-applications and kazoo-ecallmgr
systemctl start kazoo-applications kazoo-ecallmgr
Log out from the container and create a snapshot:
lxc snapshot kz1-z100
Then create an image to initialize a copy of it on server3:
lxc publish --alias kazoo kz1-z100/snap0
lxc init kazoo kz1-z200 --target server3
lxc network attach lxdbr0 kz1-z200 eth0 eth0
Set a different IP address for it:
lxc config device set kz1-z200 eth0 ipv4.address 10.17.96.98
lxc start kz1-z200 ; lxc shell kz1-z200
hostnamectl set-hostname kz1-z200.hpbx.tel
Add the IP route for kz1-z200 to the Nebula configs (on all hosts except server3) and restart the Nebula service; a sketch follows.
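A sketch of the addition on, e.g., server1, reusing the illustrative overlay IPs from the Nebula section, followed by the restart:
tun:
  unsafe_routes:
    - route: 10.17.96.98/32   # kz1-z200 on server3
      via: 192.168.100.3
snap restart nebula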
Installing Kamailio Nodes
Initialize the node on server1
lxc init images:almalinux/8 km1-z100 --target server1
lxc network attach lxdbr0 km1-z100 eth0 eth0
lxc shell km1-z100
yum -y install dnf-plugins-core
yum config-manager --add-repo https://rpm.kamailio.org/centos/kamailio.repo
yum --disablerepo="*" --enablerepo=kamailio-5.5 install neovim kamailio kamailio-presence kamailio-ldap kamailio-debuginfo kamailio-xmpp kamailio-unixodbc kamailio-utils kamailio-gzcompress kamailio-tls kamailio-outbound kamailio-kazoo kamailio-postgresql git kamailio-uuid
git clone -b 4.3-postgres --depth 1 https://github.com/kageds/kazoo-configs-kamailio /etc/kazoo
cd /etc/kazoo
cp -v system/sbin/kazoo-kamailio /usr/sbin/
'system/sbin/kazoo-kamailio' -> '/usr/sbin/kazoo-kamailio'
[root@km1-z100 kazoo]# cp -v system/systemd/kazoo-kamailio.service /lib/systemd/system/
'system/systemd/kazoo-kamailio.service' -> '/lib/systemd/system/kazoo-kamailio.service'
yum install -y postgresql-server
postgresql-setup --initdb
systemctl enable --now postgresql
su - postgres
createdb kamailio
psql
CREATE USER kamailio WITH PASSWORD 'kamailio';
GRANT ALL PRIVILEGES ON DATABASE kamailio TO kamailio;
\q
exit
Change the auth method to password:
nano /var/lib/pgsql/data/pg_hba.conf
# "local" is for Unix domain socket connections only
local all all password
# IPv4 local connections:
host all all 127.0.0.1/32 password
# IPv6 local connections:
host all all ::1/128 password
nano /var/lib/pgsql/data/postgresql.conf
shared_buffers = 256MB
max_connections = 500
systemctl restart postgresql
Test the Kamailio database connection:
psql -U kamailio -d postgres://kamailio:[email protected]/kamailio
Initialize the kamailio database with all the required tables:
psql -U kamailio -d postgres://kamailio:[email protected]/kamailio -f /etc/kazoo/kamailio/db_scripts/kamailio_initdb_postgres.sql
Go to /etc/kazoo/kamailio/local.cfg:
nano -w /etc/kazoo/kamailio/local.cfg
Change MY_HOSTNAME to your Kamailio hostname
Change MY_IP_ADDRESS to the container's IP address
Change MY_AMQP_URL to server1's RabbitMQ:
#!substdef "!MY_AMQP_URL!amqp://guest:[email protected]:5672!g"
Add another one called MY_AMQP_SECONDARY_URL pointing to server3's RabbitMQ:
#!substdef "!MY_AMQP_SECONDARY_URL!zone=z200;amqp://guest:[email protected]:5672!g"
Add listen lines advertising the public IP to Kamailio at the end of the config:
listen=UDP_SIP advertise 178.zzz.zzz.235:5060
listen=TCP_SIP advertise 178.zzz.zzz.235:5060
listen=UDP_ALG_SIP advertise 178.zzz.zzz.235:7000
listen=TCP_ALG_SIP advertise 178.zzz.zzz.235:7000
Now create an image to be copied onto server3:
lxc snapshot km1-z100
lxc publish --alias kamailio km1-z100/snap0
lxc init kamailio km1-z200 --target server3
lxc network attach lxdbr0 km1-z200 eth0 eth0
lxc config device set km1-z200 eth0 ipv4.address 10.17.96.188
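Start and enter the container before editing, and set its hostname (these steps are implied by the pattern used for the other nodes):
lxc start km1-z200 ; lxc shell km1-z200
hostnamectl set-hostname km1-z200.hpbx.tel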
Now edit MY_IP_ADDRESS, MY_HOSTNAME, MY_AMQP_URL and the advertised public address again in local.cfg:
#!substdef "!MY_AMQP_URL!amqp://guest:[email protected]:5672!g"
#!substdef "!MY_AMQP_SECONDARY_URL!zone=z100;amqp://guest:[email protected]:5672!g"
Start Kamailio:
systemctl start kazoo-kamailio
Log out, then log in to kz1-z100 on server1:
lxc shell kz1-z100
sup crossbar_maintenance create_account ACCOUNT_NAME sip.realm.ltd USERNAME PASSWORD
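For example (all values are illustrative placeholders):
sup crossbar_maintenance create_account mycompany sip.hpbx.tel admin 'S0me-Passw0rd'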
Clone Monster-UI and its apps
cd /usr/local/src
git clone --depth 1 https://github.com/2600hz/monster-ui
cd ./monster-ui/src/apps
git clone --depth 1 -b 4.3 https://github.com/2600hz/monster-ui-voip voip
git clone --depth 1 -b 4.3 https://github.com/2600hz/monster-ui-pbxs pbxs
git clone --depth 1 -b 4.3 https://github.com/2600hz/monster-ui-voicemails voicemails
git clone --depth 1 -b 4.3 https://github.com/2600hz/monster-ui-accounts accounts
git clone --depth 1 -b 4.3 https://github.com/2600hz/monster-ui-callflows callflows
git clone --depth 1 -b 4.3 https://github.com/2600hz/monster-ui-fax fax
git clone --depth 1 -b 4.3 https://github.com/2600hz/monster-ui-numbers numbers
git clone --depth 1 -b 4.3 https://github.com/2600hz/monster-ui-webhooks webhooks
git clone --depth 1 -b 4.3 https://github.com/2600hz/monster-ui-csv-onboarding csv-onboarding
cd /usr/local/src/monster-ui
Install Node.js and npm:
curl --silent --location https://rpm.nodesource.com/setup_18.x | sudo bash -
dnf install -y nodejs npm
npm install
npm install -g gulp
gulp build-prod
cp -r dist /var/www/monster-ui/
Install the nginx web server:
dnf install -y nginx
cd /etc/nginx/conf.d/
vi kazoo.conf
upstream kazoo-app.kazoo {
    ip_hash;
    server 127.0.0.1:8000;     # Kazoo z100 (local) internal IP address
    server 10.17.96.98:8000;   # Kazoo z200 internal IP address
}
upstream kazoo-app-ws.kazoo {
    ip_hash;
    server 127.0.0.1:5555;     # Kazoo z100 (local) internal IP address
    server 10.17.96.98:5555;   # Kazoo z200 internal IP address
}
server {
    listen 80;
    listen [::]:80;
    listen 443 ssl;
    listen [::]:443 ssl;
    keepalive_timeout 70;
    ssl_certificate /etc/letsencrypt/live/domain.ltd/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/domain.ltd/privkey.pem;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;
    proxy_read_timeout 6000;
    server_name monster-ui;
    root /var/www/monster-ui;
    if ($ssl_protocol = "") {
        rewrite ^ https://$server_name$request_uri? permanent;
    }
    location / {
        index index.html;
        if ($http_upgrade = "websocket") {
            proxy_pass http://kazoo-app-ws.kazoo;
        }
        proxy_http_version 1.1;
        proxy_set_header Upgrade websocket;
        proxy_set_header Connection upgrade;
    }
    location ~* /v[1-2]/ {
        if ($scheme = http) {
            return 301 https://$server_name$request_uri;
        }
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-SSL on;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://kazoo-app.kazoo;
    }
    ### Forward to certbot server
    location /.well-known {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-SSL on;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://169.254.254.254;
    }
}
Save and exit.
Let's issue the certificates using Certbot:
dnf install -y certbot
Log out to the server1 host and create a network forward to allow ports 80 and 443:
lxc network forward create lxdbr0 178.128.153.235
lxc network forward port add lxdbr0 178.128.153.235 tcp 80,443 10.17.96.97
Log back in to the kz1-z100 container:
lxc shell kz1-z100
certbot certonly -d <FQDN> --standalone
Complete the dialog and then start nginx
systemctl start nginx
Go to /var/www/monster-ui/js and edit config.js
define({
    whitelabel: {
        companyName: '2600Hz',
        applicationTitle: 'Monster UI',
        callReportEmail: '[email protected]',
        nav: {
            help: 'http://wiki.2600hz.com'
        },
        port: {
            loa: 'http://ui.zswitch.net/Editable.LOA.Form.pdf',
            resporg: 'http://ui.zswitch.net/Editable.Resporg.Form.pdf'
        }
    },
    api: {
        default: 'https://portal.domain.ltd/v2/',
        socket: 'wss://portal.domain.ltd/'
    }
});
Browse to the FQDN to test Monster-UI.
Go back to kz1-z100 and import the apps:
sup crossbar_maintenance init_apps /var/www/monster-ui/apps/ https://portal.domain.ltd/v2/
Refresh the browser and activate the desired apps on App Exchange
Go to kz1-z100 and import the Kazoo sounds:
cd /usr/local/src/
git clone --depth 1 -b 4.3 https://github.com/2600hz/kazoo-sounds
sup kazoo_media_maintenance import_prompts /usr/local/src/kazoo-sounds/kazoo-core/en/us/ en-us
Add the freeswitch node to each ecallmgr, one per zone
On kz1-z100:
sup -n ecallmgr ecallmgr_maintenance add_fs_node [email protected] 'false'
On kz1-z200:
sup -n ecallmgr ecallmgr_maintenance add_fs_node [email protected] 'false'
Now head to the server1 host and forward the Kamailio ports (the forward for server1's address was already created earlier):
# lxc network forward port add lxdbr0 178.zzz.zzz.235 tcp 5060,7000,7001,5061 10.17.96.187
# lxc network forward port add lxdbr0 178.zzz.zzz.235 udp 5060,7000 10.17.96.187
On server2:
lxc network forward create lxdbr0 178.128.150.82
lxc network forward port add lxdbr0 178.zzz.zzz.82 udp 16384-32768 10.17.96.162
On server 3:
lxc network forward create lxdbr0 164.zzz.zzz.7
[root@almalinux-s-1vcpu-2gb-fra1-01 ~]# lxc network forward port add lxdbr0 164.zzz.zzz.7 tcp 5060,7000,5061,7001 10.17.96.188
[root@almalinux-s-1vcpu-2gb-fra1-01 ~]# lxc network forward port add lxdbr0 164.zzz.zzz.7 udp 5060,7000 10.17.96.188
On server 4:
lxc network forward create lxdbr0 164.zzz.zzz.159
lxc network forward port add lxdbr0 164.zzz.zzz.159 udp 16384-32768 10.17.96.163