Similar benchmarks done by other people:

- http://www.scalagent.com/IMG/pdf/Benchmark_MQTT_servers-v1-1.pdf
- http://rexpie.github.io/2015/08/23/stress-testing-mosquitto.html
What I wanted to measure:

- Maximum number of concurrent connections
- Transmission latency as concurrent connections increase
- Message loss (sent using QoS 2) as concurrent connections increase
- Transmission latency as payload size increases
- Message loss (sent using QoS 2) as payload size increases
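For the payload-size runs, the companion publisher tool from the same emqtt_benchmark suite as the subscriber used below should work; assuming it supports `-s` for payload size in bytes and `-I` for the publish interval (both flags are my assumption from the tool's README, not verified here), an invocation might look like:

```shell
# Hypothetical publisher run: 100 clients, QoS 2, 1 KB payloads
# (-s and -I flags assumed from the emqtt_benchmark docs)
./emqtt_bench_pub -h 192.168.1.105 -p 1883 -c 100 -I 1000 -t test_topic -s 1024 -q 2
```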
The default installations of both EMQ and VerneMQ performed poorly. I used pointers from the EMQTT docs and the VerneMQ docs. Kernel variables that might need tuning:
- fs.file-max: The maximum number of file handles the kernel will allocate system-wide
- fs.nr_open: The maximum number of file handles a single process can allocate
- net.core.somaxconn: Maximum backlog of accepted connections queued on a listening socket
- net.ipv4.tcp_max_syn_backlog: Maximum backlog of connections that have not yet completed the TCP handshake
- net.core.netdev_max_backlog: Maximum number of packets queued on the input side when the interface receives them faster than the kernel can process them
- net.core.rmem_default: Default receive buffer size for sockets
- net.core.wmem_default: Default send buffer size for sockets
- net.core.rmem_max: Maximum receive buffer size for sockets
- net.core.wmem_max: Maximum send buffer size for sockets
- net.core.optmem_max: Maximum ancillary (option) buffer size allowed per socket
- net.ipv4.tcp_rmem: Min/default/max receive buffer sizes for TCP sockets
- net.ipv4.tcp_wmem: Min/default/max send buffer sizes for TCP sockets
- net.nf_conntrack_max / net.netfilter.nf_conntrack_max: Maximum number of connections tracked by netfilter's connection tracking
- net.netfilter.nf_conntrack_tcp_timeout_time_wait: How long conntrack keeps entries for connections in TIME_WAIT
- net.ipv4.tcp_max_tw_buckets: Maximum number of sockets held in TIME_WAIT at once
- net.ipv4.tcp_fin_timeout: How long a socket stays in FIN-WAIT-2 before being torn down
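To make this concrete, here is a sketch of a sysctl drop-in collecting the variables above. The values are illustrative starting points pulled from common tuning guides, not tested recommendations for any particular load:

```
# /etc/sysctl.d/99-mqtt-tuning.conf (illustrative values only)
fs.file-max = 2097152
fs.nr_open = 2097152
net.core.somaxconn = 32768
net.ipv4.tcp_max_syn_backlog = 16384
net.core.netdev_max_backlog = 16384
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.optmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.netfilter.nf_conntrack_max = 1000000
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 30
net.ipv4.tcp_max_tw_buckets = 1048576
net.ipv4.tcp_fin_timeout = 15
```

Apply with `sysctl --system` (or `sysctl -p <file>`) as root.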
Both VerneMQ and EMQ recommend tuning the Erlang VM. TODO: document the changes made here.
I also tuned the client performing the benchmark tests a bit:
sysctl -w net.ipv4.ip_local_port_range="500 65535"
sysctl -w fs.file-max=1000000
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/Tuning_and_Optimizing_Red_Hat_Enterprise_Linux_for_Oracle_9i_and_10g_Databases/chap-Oracle_9i_and_10g_Tuning_Guide-Setting_File_Handles.html

To check the max number of file handles:
cat /proc/sys/fs/file-max
To check current file handle usage (three values: the number of allocated file handles, the number of allocated but unused handles, and the maximum number of file handles):
cat /proc/sys/fs/file-nr
To change the system's max open file handles while the system is still running:
sysctl -w fs.file-max=2097152
To make changes permanent for the above kernel variable:
echo fs.file-max=2097152 >> /etc/sysctl.conf
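Note that fs.file-max is only the system-wide ceiling; the limit the broker process actually runs under is the per-process one shown by `ulimit -n`, typically raised via /etc/security/limits.conf (or the service manager, if the broker runs under one). A sketch, assuming the broker runs as a user named emqtt (the user name and values are illustrative):

```
# /etc/security/limits.conf (illustrative)
emqtt  soft  nofile  1048576
emqtt  hard  nofile  1048576
```

Values above fs.nr_open are rejected, so raise fs.nr_open first if needed.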
EMQ was installed with all variables at their defaults. The only kernel changes were bumping fs.file-max=2097152 and fs.nr_open=2097152, since these were the only kernel variables I was initially aware needed changing. For the Erlang VM: node.process_limit = 2097152 and node.max_ports = 1048576. For the connection type I was testing (TCP):

listener.tcp.external.acceptors = 64
listener.tcp.external.max_clients = 1000000
After doing the file handle tuning, I tried connecting 1000 clients subscribed to one topic.
./emqtt_bench_sub -h 192.168.1.105 -p 1883 -c 1000 -q 2 -C false -t test_topic
I ran into the following error when trying to connect more than 992 concurrent connections:
Acceptor on 0.0.0.0:1883 suspend 100ms for 100 emfile errors
It might have been because the broker was running at 100% CPU, but bumping the number of processors from 1 to 2 made no difference. After further reading: the poor performance was likely because I hadn't tuned the Erlang VM (FML).
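One likely explanation for the emfile errors specifically: emfile is the error a process gets when it exhausts its own open-file limit, which fs.file-max (a system-wide limit) does not raise. On many distros the default per-process limit is 1024 descriptors, and the Erlang VM uses a few dozen for its own files and ports, which lines up with connections failing just under 1000. A quick way to check (the `<pid>` below is a placeholder):

```shell
# Per-process open-file limit for the current shell; processes started
# from this shell inherit it
ulimit -n

# For an already-running broker, inspect its limits directly; <pid> is a
# placeholder for the beam process id (e.g. from `pgrep -f beam`)
# grep 'open files' /proc/<pid>/limits
```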