First I created 3 droplets on DigitalOcean with 4 cores and 8GB of RAM each. Log in as root to each and run:
sysctl -w fs.file-max=12000500
sysctl -w fs.nr_open=20000500
ulimit -n 4000000
sysctl -w net.ipv4.tcp_mem='10000000 10000000 10000000'
sysctl -w net.ipv4.tcp_rmem='1024 4096 16384'
sysctl -w net.ipv4.tcp_wmem='1024 4096 16384'
sysctl -w net.core.rmem_max=16384
sysctl -w net.core.wmem_max=16384
wget http://packages.erlang-solutions.com/erlang-solutions_1.0_all.deb
sudo dpkg -i erlang-solutions_1.0_all.deb
yes | sudo apt-get update
yes | sudo apt-get install elixir esl-erlang build-essential git gnuplot libtemplate-perl htop
echo "root soft nofile 4000000" >> /etc/security/limits.conf
echo "root hard nofile 4000000" >> /etc/security/limits.conf
Then I copied and compiled the application with:
rsync -avz --exclude _build . [email protected]:~/brokaw
ssh [email protected]
cd ~/brokaw
MIX_ENV=prod mix deps.compile
MIX_ENV=prod mix compile
# re-apply the ulimits in this new shell
sysctl -w fs.file-max=12000500
sysctl -w fs.nr_open=20000500
ulimit -n 4000000
sysctl -w net.ipv4.tcp_mem='10000000 10000000 10000000'
sysctl -w net.ipv4.tcp_rmem='1024 4096 16384'
sysctl -w net.ipv4.tcp_wmem='1024 4096 16384'
sysctl -w net.core.rmem_max=16384
sysctl -w net.core.wmem_max=16384
# start the app
vim config/prod.exs # modify the check_hosts to include http://138.68.250.167
PORT=4000 MIX_ENV=prod iex --name [email protected] --cookie watwat -S mix phoenix.server
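For reference, here is a minimal sketch of what that config/prod.exs edit might look like. The module name Brokaw.Endpoint, the :brokaw app name, and the exact option layout are assumptions (the post doesn't show the file); what it calls check_hosts is shown here as Phoenix's standard check_origin option, which takes a list of allowed origins:

```elixir
use Mix.Config

# Hypothetical prod endpoint config; Brokaw.Endpoint and the :brokaw app name
# are assumed. The important part is adding the droplet's public address to
# the list of allowed websocket origins.
config :brokaw, Brokaw.Endpoint,
  http: [port: {:system, "PORT"}],
  check_origin: ["http://138.68.250.167"]
```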
From whichever node booted last, I joined the cluster like this:
iex> Node.ping(:"[email protected]")
:pong
iex> Node.ping(:"[email protected]")
:pong
iex> Node.list()
[..]
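Pinging each node by hand is fine for three machines; if you'd rather not retype the names, a small sketch like the one below does the same thing from any node's iex session. The node names are the ones used when booting the droplets above:

```elixir
# Ping every other app node so the cluster is fully connected.
nodes = [
  :"[email protected]",
  :"[email protected]",
  :"[email protected]"
]

for n <- nodes, n != Node.self() do
  :pong = Node.ping(n)
end

Node.list()
```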
For the load-testing side I created 3 more droplets on DigitalOcean, also with 4 cores and 8GB of RAM. Note: whichever node acted as the Tsung coordinator ended up maxing out its CPU, so it's probably advisable to use something bigger next time. Log in as root to each and run:
curl -sSL https://agent.digitalocean.com/install.sh | sh # install the DigitalOcean monitoring agent
wget http://packages.erlang-solutions.com/erlang-solutions_1.0_all.deb
sudo dpkg -i erlang-solutions_1.0_all.deb
yes | sudo apt-get update
yes | sudo apt-get install elixir esl-erlang build-essential git gnuplot libtemplate-perl htop
wget http://tsung.erlang-projects.org/dist/tsung-1.6.0.tar.gz
tar -xvf tsung-1.6.0.tar.gz
cd tsung-1.6.0/
./configure
make
sudo make install
cd ..
sysctl -w fs.file-max=12000500
sysctl -w fs.nr_open=20000500
ulimit -n 4000000
sysctl -w net.ipv4.tcp_mem='10000000 10000000 10000000'
sysctl -w net.ipv4.tcp_rmem='1024 4096 16384'
sysctl -w net.ipv4.tcp_wmem='1024 4096 16384'
sysctl -w net.core.rmem_max=16384
sysctl -w net.core.wmem_max=16384
echo "root soft nofile 4000000" >> /etc/security/limits.conf
echo "root hard nofile 4000000" >> /etc/security/limits.conf
vim /etc/hosts # add entries for tsung1, tsung2 and tsung3 with the IPs we were assigned so the nodes can find each other
Then I created the brokaw.xml file on one of those Tsung nodes and started the benchmark run with:
tsung -k -f brokaw.xml start
1 server (4 cores, 8GB RAM) on DigitalOcean, 1 load test machine also on DigitalOcean
num users | check for online user (µs) | check for offline user (µs) |
---|---|---|
10 | 6.52 | 6.71 |
100 | 14.44 | 9.74 |
1000 | 10.78 | 13.14 |
3000 | 34.52 | 33.36 |
10000 | 31 | 28.82 |
20000 | 31.89 | 28.81 |
40000 | 32.33 | 34.51 |
55000 | 48.65 | 34.88 |
[Graph: rate at which new users could connect (I was attempting to do 1k/sec)]
[Graph: total connected websockets]
Setup details for this benchmark can be found here. During the test the brokaw nodes never used more than 50% of their CPU, but the load-testing client boxes got maxed out.
num users | check for online user (µs) | check for offline user (µs) | memory used (per node) |
---|---|---|---|
30 | 10.39 | 9.87 | 200MB |
1000 | 12.07 | 12.81 | 215MB |
10000 | 42.43 | 42.23 | 400MB |
50000 | 76.94 | 15.37 | 1GB |
100000 | 15.18 | 15.64 | 1.8GB |
150000 | 17.03 | 15.35 | 2.2GB |
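The per-lookup numbers in the tables above are microseconds for a single presence check. A quick way to reproduce that kind of measurement from an iex session is :timer.tc/1; the lookup function below is a hypothetical name, since the post doesn't show the app's API:

```elixir
# Hypothetical measurement sketch: time a single "is this user online?" lookup.
# Brokaw.Presence.online?/1 is an assumed function name, not the app's real API.
{micros, result} = :timer.tc(fn -> Brokaw.Presence.online?("user_42") end)
IO.puts("lookup returned #{inspect(result)} in #{micros} µs")
```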