I need to create a server farm that can handle 5+ million connections and 5+ million topics (one per client), and process 300k messages/sec.
I tried to see what various brokers could handle.
ANSWER: While doing this I realized that I had misspelled net.ipv4.ip_local_port_range in the client's /etc/sysctl.conf file.
I am now able to connect 956,591 MQTT clients to my Apollo server in 188 seconds.
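A quick sanity check that would have caught this (my suggestion, not part of the original setup): read the live value back instead of trusting the file, since a misspelled key is simply never applied:

[root@ip ec2-user]# sysctl net.ipv4.ip_local_port_range
net.ipv4.ip_local_port_range = 1024 65535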
More info: To isolate whether this is an O/S connection limitation or a broker limitation, I decided to write a simple client/server test.
The server:
import java.net.ServerSocket;
import java.net.Socket;
import java.util.ArrayList;
import java.util.List;

List<Socket> clients = new ArrayList<>();
ServerSocket server = new ServerSocket(1884);
while (true) {
    // Accept each connection and hold it open; nothing is read or written
    Socket client = server.accept();
    clients.add(client);
}
The client:
import java.net.InetAddress;
import java.net.Socket;
import java.util.ArrayList;
import java.util.List;

List<Socket> clients = new ArrayList<>();
while (true) {
    InetAddress clientIPToBindTo = getNextClientVIP(); // rotate across the 21 VIPs
    Socket client = new Socket(hostname, 1884, clientIPToBindTo, 0); // local port 0 = any free ephemeral port
    clients.add(client);
}
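getNextClientVIP() is the piece that makes the numbers below possible: by binding each outgoing socket to a different local IP, every IP contributes its own ephemeral port range. Its implementation isn't shown above; a minimal round-robin sketch, assuming the 21 virtual IPs are already configured on the client's interface, could look like this:

// Hypothetical helper (not in the original post): round-robin across the
// virtual IPs configured on the client box.
private static final List<InetAddress> vips = new ArrayList<>(); // populated with the 21 VIPs at startup
private static int next = 0;

private static InetAddress getNextClientVIP() {
    return vips.get(next++ % vips.size());
}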
With 21 IPs, I would expect (65535 - 1024) * 21 = 64511 * 21 = 1,354,731 to be the boundary. In reality I am able to achieve 1,231,734:
[root@ip ec2-user]# cat /proc/net/sockstat
sockets: used 1231734
TCP: inuse 5 orphan 0 tw 0 alloc 1231307 mem 2
UDP: inuse 4 mem 1
UDPLITE: inuse 0
RAW: inuse 0
FRAG: inuse 0 memory 0
So the socket/kernel/I/O side is worked out.
I am STILL unable to achieve this using any broker.
Again, these are the kernel settings just after my client/server test.
Client:
[root@ip ec2-user]# sysctl -p
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 5242880 5242880 15242880
net.ipv4.tcp_tw_recycle = 1
fs.file-max = 20000000
fs.nr_open = 20000000
[root@ip ec2-user]# cat /etc/security/limits.conf
* soft nofile 2000000
* hard nofile 2000000
root soft nofile 2000000
root hard nofile 2000000
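Side note (my addition, not part of the original test): the limits.conf values only help if the JVM actually inherits them; an already-open shell session silently keeps its old ulimit. On a HotSpot JVM on Linux you can read the effective limit from inside the process:

import java.lang.management.ManagementFactory;
import com.sun.management.UnixOperatingSystemMXBean;

public class FdLimitCheck {
    public static void main(String[] args) {
        // HotSpot on Unix-like platforms exposes this MXBean; the cast fails elsewhere
        UnixOperatingSystemMXBean os =
                (UnixOperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();
        System.out.println("max fds:  " + os.getMaxFileDescriptorCount());
        System.out.println("open fds: " + os.getOpenFileDescriptorCount());
    }
}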
Server:
[root@ ec2-user]# sysctl -p
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 5242880 5242880 5242880
net.ipv4.tcp_tw_recycle = 1
fs.file-max = 20000000
fs.nr_open = 20000000
net.ipv4.tcp_syncookies = 0
net.ipv4.tcp_max_syn_backlog = 1000000
net.ipv4.tcp_synack_retries = 3
net.core.somaxconn = 65535
net.core.netdev_max_backlog = 1000000
net.core.optmem_max = 20480000
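One more note tying the server sysctls back to the Java test (my observation, not from the original post): net.core.somaxconn only sets an upper bound on the listen backlog; Java's single-argument ServerSocket constructor defaults the backlog to 50, so a burst of a million connection attempts will overflow the accept queue unless a larger backlog is requested explicitly:

import java.net.ServerSocket;

// Request a large accept queue; the kernel clamps the value to net.core.somaxconn.
ServerSocket server = new ServerSocket(1884, 65535);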