@lukebakken
Last active May 11, 2026 16:05
Issue 16347 - per-protocol connection limits implementation plan

Context

GitHub issue: rabbitmq/rabbitmq-server#16347

The issue is a review of what remains to be done for per-protocol connection limits. The stream plugin already has stream.max_connections and the web_mqtt plugin already has web_mqtt.max_connections. This document covers the implementation of mqtt.max_connections (PR #16367, under revision) and stomp.max_connections (PR #16368, open).

Status of PR #16367: The initial implementation used ranch:info(RanchRef) (same as the stream plugin). Reviewer @ansd identified that this gives a per-listener count, not a node-wide count. MQTT supports port-to-vhost mapping (multiple listeners on different ports), dual-stack (separate IPv4/IPv6 listeners), and mixed TCP/TLS deployments — each of which is a separate Ranch ref with its own supervisor. With four listeners and max_connections = 1000, the actual node limit would be 4000. The fix is to use the MQTT PG scope ETS table, described below.
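To make the multi-listener problem concrete, here is a hypothetical rabbitmq.conf sketch (ports and values illustrative, not from the PR). Each listener below is a separate Ranch ref, so a per-Ranch-ref check of 1000 would effectively permit up to 3000 connections on the node:

```ini
# Three MQTT listeners = three Ranch refs on one node
mqtt.listeners.tcp.1 = 1883
mqtt.listeners.tcp.2 = 1884
mqtt.listeners.ssl.1 = 8883

# Intended as a node-wide limit; with ranch:info/1 it would
# instead apply per listener
mqtt.max_connections = 1000
```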

How AMQP 0-9-1 handles connection limits (background)

AMQP 0-9-1 has two independent connection limit mechanisms. Understanding both is important context for the MQTT approach.

ranch_connection_max - Ranch transport-level

Set via ranch_connection_max in rabbitmq.conf. Applied in tcp_listener_sup.erl:38 as the max_connections field in the Ranch listener options (divided by num_conns_sups).

When the limit is reached, ranch_conns_sup does not send the resumption message to the acceptor process after handing off a connection (ranch_conns_sup.erl:264-268). The acceptor blocks inside start_protocol/3 and stops calling Transport:accept/2. The kernel continues completing TCP three-way handshakes and placing sockets in the OS accept queue (sized by the backlog option, default 128). Once that queue is full, the kernel on Linux silently drops new SYN packets. Clients experience a TCP connection timeout with no RST. The TCP listen queue is directly affected.

When a connection closes, ranch_conns_sup wakes a sleeping acceptor and accepting resumes.

connection_max - Application-level in rabbit_reader

Set via connection_max in rabbitmq.conf. Checked in rabbit_reader.erl:is_over_node_connection_limit/1 (line 1416), called from handle_method0 when connection.open is received (line 1318). The implementation calls ranch:info(RanchRef) and compares active_connections against the limit. If over the limit, rabbit_misc:protocol_error(not_allowed, ...) is raised, sending a connection.close frame to the client before closing. The TCP listen queue is not affected; Ranch keeps accepting normally.
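For reference, the two limits are configured independently in rabbitmq.conf; a minimal sketch with illustrative values:

```ini
# Transport-level gate: Ranch stops accepting once this many sockets
# are open; excess clients sit in the kernel accept queue and then
# time out (no RST)
ranch_connection_max = 5000

# Application-level check: the socket is accepted, then the client is
# refused with a connection.close frame during connection.open
connection_max = 4000
```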

Relevance to MQTT

mqtt.max_connections corresponds to the connection_max pattern - an application-level check in the connection handler, not a Ranch transport-level gate. The stream plugin uses ranch:info(RanchRef) for this, but that approach is incorrect for MQTT because MQTT supports multi-listener configurations (port-to-vhost mapping, dual-stack, TCP+TLS) where each listener has a separate Ranch ref. Instead, MQTT's existing PG scope is used for a true node-wide count (see enforcement approach below).

What already exists

  • stream.max_connections - enforced in rabbit_stream_reader via ranch:info/1 after Ranch accepts the connection, during the OPEN frame
  • web_mqtt.max_connections - enforced at the Ranch/Cowboy transport layer at listener startup time
  • rabbit_mqtt_processor already enforces check_vhost_connection_limit/1 and check_user_connection_limit/1 during CONNECT packet processing

How MQTT starts its listeners

MQTT uses rabbit_mqtt_sup -> rabbit_networking:tcp_listener_spec/10 -> tcp_listener_sup. The key point is that tcp_listener_sup passes max_connections to Ranch from rabbit.ranch_connection_max (the global node limit), not from a per-plugin setting.

Web MQTT bypasses tcp_listener_sup entirely and starts its Cowboy listeners directly, passing max_connections from application:get_env(rabbitmq_web_mqtt, max_connections, infinity). That is why web_mqtt can enforce a plugin-specific limit at the Ranch level without touching tcp_listener_sup.

Since MQTT uses tcp_listener_sup (shared infrastructure), adding a per-plugin max_connections at the Ranch transport layer would require modifying tcp_listener_sup, which affects every protocol that uses it. That approach is out of scope. Instead, we check the limit in the connection handler after Ranch has accepted the connection.

Enforcement approach: PG scope ETS count

When Ranch accepts a connection, it starts a rabbit_mqtt_reader process. That process calls rabbit_networking:handshake/2 and then waits for the CONNECT packet. When the CONNECT packet arrives, rabbit_mqtt_processor:init/5 is called, which calls process_connect/5. The first check in the maybe chain inside process_connect/5 is check_node_connection_limit().

check_node_connection_limit/0 uses ets:info(persistent_term:get(?PG_SCOPE), size) to get the number of active MQTT connections node-wide, then compares against application:get_env(rabbitmq_mqtt, max_connections, infinity).

Why the PG scope gives a correct node-wide count

The MQTT plugin creates a node-local PG scope (rabbit:pg_local_scope(?PG_SCOPE)) in rabbit_mqtt_sup. Each MQTT connection registers in this scope by calling pg:join(PgScope, {VHost, ClientId}, self()) inside register_client_id/4 (line 721 of rabbit_mqtt_processor.erl). The MQTT spec requires unique ClientIDs per node, which RabbitMQ enforces, so each {VHost, ClientId} group has exactly one member. The OTP pg module stores one ETS row per group ({Group, AllPids, LocalPids}), so ets:info(PgScope, size) equals the number of active MQTT connections on the node.

This count is:

  • Node-wide: the PG scope is local to the node; it is not split by listener, IP family, or transport
  • O(1): a single ETS metadata lookup
  • Inclusive of all transports: plain TCP, TLS, and Web MQTT connections all call register_client_id and join the same PG scope
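The counting property can be demonstrated in isolation. The following standalone sketch (module and scope names invented for illustration) shows that OTP pg stores one ETS row per group, so ets:info(Scope, size) counts groups, not member processes:

```erlang
%% Demo: pg names its ETS table after the scope and stores one
%% {Group, AllPids, LocalPids} row per group.
-module(pg_count_demo).
-export([run/0]).

run() ->
    Scope = demo_scope,
    {ok, _} = pg:start_link(Scope),
    %% Two "connections" join under distinct {VHost, ClientId} groups,
    %% mirroring register_client_id/4 in rabbit_mqtt_processor.
    ok = pg:join(Scope, {<<"/">>, <<"client-1">>}, self()),
    ok = pg:join(Scope, {<<"/">>, <<"client-2">>}, self()),
    %% One ETS row per group: size is 2, regardless of members per group.
    2 = ets:info(Scope, size).
```

This relies on pg's internal table layout (named table, one row per group), which the plan also depends on; if that internal ever changed, the count would need revisiting.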

Comparison: >= Limit not > Limit

register_client_id is called at line 214 of process_connect, which is after the check_node_connection_limit() call at line 197. The current connection is therefore not yet in the PG scope when the check runs. The correct comparison is ActiveConns >= Limit: if the count is already at the limit, the new connection must be rejected (it would become the (Limit+1)-th).

This differs from the Ranch approach, where ranch:handshake/1 has already completed and the connection is already counted in active_connections, making > Limit correct for that case.

Web MQTT is included

Web MQTT connections call rabbit_mqtt_processor:init/4 (which delegates to init/5 with RanchRef = undefined). They also call register_client_id on successful CONNECT, joining the same PG scope. With the PG scope approach, Web MQTT connections are counted in the node-wide total and are subject to mqtt.max_connections. This is the correct and intended behaviour — mqtt.max_connections is a node-wide limit for all MQTT connections regardless of transport.

rabbit_mqtt_processor:init/4 — no init/5 needed

rabbit_mqtt_processor:init/4 is in the -export list and is called by two places:

  1. deps/rabbitmq_mqtt/src/rabbit_mqtt_reader.erl — plain MQTT
  2. deps/rabbitmq_web_mqtt/src/rabbit_web_mqtt_handler.erl:385 — Web MQTT

Since check_node_connection_limit/0 takes no argument, RanchRef is not needed anywhere in the connect flow. init/5 is therefore unnecessary. All changes made to rabbit_mqtt_reader.erl in the enforcement commit (adding ranch_ref to #state{}, storing it in init/1, binding it in process_received_bytes/2, and passing it to init/5) can be fully reverted. Both callers use init/4 as before.

Files to change

Already committed — schema and test (no further changes needed)

deps/rabbitmq_mqtt/priv/schema/rabbitmq_mqtt.schema — committed in "Add mqtt.max_connections cuttlefish schema mapping":

{mapping, "mqtt.max_connections", "rabbitmq_mqtt.max_connections",
    [{datatype, [{atom, infinity}, integer]}, {validators, ["non_negative_integer"]}]}.

{translation, "rabbitmq_mqtt.max_connections",
fun(Conf) ->
    case cuttlefish:conf_get("mqtt.max_connections", Conf, undefined) of
        undefined -> cuttlefish:unset();
        infinity  -> infinity;
        Val when is_integer(Val) -> Val;
        _         -> cuttlefish:invalid("should be a non-negative integer")
    end
end}.

deps/rabbitmq_mqtt/test/config_schema_SUITE_data/rabbitmq_mqtt.snippets — committed in same commit.

deps/rabbitmq_mqtt/test/auth_SUITE.erl — committed in "Add node_connection_limit test to auth_SUITE". Test behaviour is unchanged by the counting mechanism switch: with max_connections = 0, both approaches reject the first connection attempt (Ranch: 1 > 0; PG scope: 0 >= 0). No further changes needed.

Requires change: deps/rabbitmq_mqtt/src/rabbit_mqtt_processor.erl

Exports: init/4 and init/5 are both currently exported; remove init/5 from the -export list.

Restore init/4 to its original full body — revert the wrapper introduced in the enforcement commit. init/4 calls process_connect/5 directly (not via init/5):

init(#mqtt_packet{fixed = #mqtt_packet_fixed{type = ?CONNECT},
                  variable = ConnectPacket},
     Socket, ConnName, SendFun) ->
    case rabbit_net:socket_ends(Socket, inbound) of
        {ok, SocketEnds} ->
            process_connect(ConnectPacket, Socket, ConnName, SendFun, SocketEnds);
        {error, Reason} ->
            {error, {socket_ends, Reason}}
    end.

Remove init/5 entirely — the clause added in the enforcement commit is deleted.

process_connect/6 → process_connect/5 — remove RanchRef from the function head; it is no longer used.

Call site — change ok ?= check_node_connection_limit(RanchRef) to ok ?= check_node_connection_limit().

Replace check_node_connection_limit/1 with check_node_connection_limit/0:

Remove both clauses of the old check_node_connection_limit/1 and replace with:

check_node_connection_limit() ->
    case application:get_env(rabbitmq_mqtt, max_connections, infinity) of
        infinity ->
            ok;
        Limit when is_integer(Limit), Limit >= 0 ->
            PgScope = persistent_term:get(?PG_SCOPE),
            case ets:info(PgScope, size) of
                ActiveConns when is_integer(ActiveConns), ActiveConns >= Limit ->
                    ?LOG_ERROR("MQTT connection failed: node connection limit ~tp is reached",
                               [Limit]),
                    {error, ?RC_QUOTA_EXCEEDED};
                _ ->
                    ok
            end;
        _ ->
            ok
    end.

?PG_SCOPE is already used in rabbit_mqtt_processor.erl (in register_client_id/4 and remove_duplicate_client_id_connections/3), so the macro is already in scope. ets:info/2 returns undefined if the table does not exist; the is_integer(ActiveConns) guard catches this and falls through to _ -> ok (fail open), though in practice the PG scope is always running when connections are being processed.
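The fail-open behaviour can be checked in a shell; ets:info/2 on a table that does not exist returns undefined rather than raising:

```erlang
%% no_such_table is an arbitrary unused name; the is_integer/1 guard in
%% check_node_connection_limit/0 turns this undefined into ok.
undefined = ets:info(no_such_table, size).
```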

Requires revert: deps/rabbitmq_mqtt/src/rabbit_mqtt_reader.erl

All four changes from the enforcement commit must be reverted:

  1. Remove ranch_ref :: ranch:ref() from #state{}
  2. Remove ranch_ref = Ref from init/1
  3. Remove ranch_ref = RanchRef from the process_received_bytes/2 pattern match
  4. Revert the init/5 call back to init/4 (removing the RanchRef argument)

What is NOT changed

  • deps/rabbitmq_web_mqtt/src/rabbit_web_mqtt_handler.erl — keeps calling init/4, no changes needed
  • deps/rabbit/src/tcp_listener_sup.erl — not modified; per-plugin limits do not go through shared infrastructure
  • deps/rabbit/src/rabbit_networking.erl — not modified

Verification notes

  • The PG scope ETS table has one row per {VHost, ClientId} group. Since MQTT enforces unique ClientIDs per vhost (enforced by remove_duplicate_client_id_connections/3), each group has exactly one member. ets:info(PgScope, size) therefore equals the number of active MQTT connections on the node.
  • The check runs at line 197, before register_client_id at line 214. The current connection is not yet in the PG scope. Comparison is >= Limit.
  • With max_connections = 0: PG size is 0 on the first attempt; 0 >= 0 is true; the connection is rejected. This matches the test behaviour.
  • process_connect/1 (arity 1, handles session/subscription setup after auth) is a separate function and is not affected.
  • The non_negative_integer validator is defined in the core rabbit.schema and is available to all plugin schemas.

Part 2: stomp.max_connections (branch feature/gh-16347-stomp-connections-max)

STOMP architecture differences from MQTT

initial_state/2 pattern instead of init/4

STOMP initializes the processor with rabbit_stomp_processor:initial_state/2, called from two places:

  1. deps/rabbitmq_stomp/src/rabbit_stomp_reader.erl:83 - plain STOMP (has Ref)
  2. deps/rabbitmq_web_stomp/src/rabbit_web_stomp_handler.erl:215 - Web STOMP (no Ref)

Parallel to the MQTT approach: keep initial_state/2 as a wrapper that calls initial_state/3 with undefined, so Web STOMP needs no change. The plain STOMP reader calls initial_state/3 with Ref.
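A sketch of the wrapper clause (described in full in the enforcement commit below):

```erlang
%% initial_state/2 stays as the Web STOMP entry point; the plain STOMP
%% reader passes its Ranch ref via the new initial_state/3.
initial_state(Configuration, ProcInitArgs) ->
    initial_state(Configuration, ProcInitArgs, undefined).
```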

register_non_amqp_connection timing

In rabbit_stomp_reader.erl:101, rabbit_networking:register_non_amqp_connection(self()) is called during init/1, before any CONNECT frame arrives. This is unlike MQTT where registration happens after auth. This has no effect on the limit check because we use ranch:info(RanchRef) not the non-AMQP connection registry.

Error handling pattern

process_connect/3 in rabbit_stomp_processor.erl uses a maybe chain (lines 329-407). Error cases are handled in an else block using the error/3 helper, which returns {error, Message, Detail, State}. process_request/2 catches this and calls send_error/3 to transmit the ERROR frame.

Existing precedent: check_vhost_connection_limit/1 returns {error, quota_exceeded}, which the else block maps to a "Bad CONNECT" ERROR frame. We use a distinct atom node_connection_limit_exceeded to produce a distinct error message.

Ranch dep

ranch is listed in deps/rabbitmq_stomp/Makefile:33 (DEPS = ranch rabbit_common rabbit).

Note on the Ranch-based approach for STOMP

STOMP does not have port-to-vhost mapping or the equivalent multi-listener configurations that make ranch:info(RanchRef) incorrect for MQTT. The Ranch-based approach is therefore acceptable for STOMP. If STOMP gains similar multi-listener configurations in the future, a PG-scope-style fix would be needed, but that is not a concern today.

Files to change

Commit 1 - Schema

deps/rabbitmq_stomp/priv/schema/rabbitmq_stomp.schema

Add after the stomp.num_acceptors.tcp mapping (after line 158):

{mapping, "stomp.max_connections", "rabbitmq_stomp.max_connections",
    [{datatype, [{atom, infinity}, integer]}, {validators, ["non_negative_integer"]}]}.

{translation, "rabbitmq_stomp.max_connections",
fun(Conf) ->
    case cuttlefish:conf_get("stomp.max_connections", Conf, undefined) of
        undefined -> cuttlefish:unset();
        infinity  -> infinity;
        Val when is_integer(Val) -> Val;
        _         -> cuttlefish:invalid("should be a non-negative integer")
    end
end}.

deps/rabbitmq_stomp/test/config_schema_SUITE_data/rabbitmq_stomp.snippets

The file ends with the {max_frame_size_unauthenticated, ...} entry (no trailing comma) followed by the closing ]. Add a comma to that entry and append:

 {max_connections,
  "stomp.max_connections = 10",
  [{rabbitmq_stomp,[{max_connections, 10}]}],
  [rabbitmq_stomp]}

Commit 2 - Enforcement

deps/rabbitmq_stomp/src/rabbit_stomp_processor.erl

  1. Add ranch_ref :: ranch:ref() | undefined to #cfg{} record.
  2. Add initial_state/3 to the -export list.
  3. Change initial_state/2 to a one-line wrapper calling initial_state/3 with undefined.
  4. Add initial_state/3 - same body as current initial_state/2 but stores RanchRef in #cfg{ranch_ref = RanchRef}.
  5. In process_connect/3, bind ranch_ref = RanchRef in the function head pattern match alongside the existing conn_info and ssl_login_name bindings, then add as the first maybe check: ok ?= check_node_connection_limit(RanchRef).
  6. Add else clause: {error, node_connection_limit_exceeded} -> error("Bad CONNECT", "Connection refused: node connection limit reached", State).
  7. Add check_node_connection_limit/1 near check_vhost_connection_limit/1:
check_node_connection_limit(undefined) ->
    ok;
check_node_connection_limit(RanchRef) ->
    case application:get_env(rabbitmq_stomp, max_connections, infinity) of
        infinity ->
            ok;
        Limit when is_integer(Limit), Limit >= 0 ->
            #{active_connections := ActiveConns} = ranch:info(RanchRef),
            case ActiveConns > Limit of
                false ->
                    ok;
                true ->
                    ?LOG_ERROR("STOMP connection failed: node connection limit ~tp is reached",
                               [Limit]),
                    {error, node_connection_limit_exceeded}
            end;
        _ ->
            ok
    end.

deps/rabbitmq_stomp/src/rabbit_stomp_reader.erl

Change lines 83-84 from rabbit_stomp_processor:initial_state(Configuration, ProcInitArgs) to rabbit_stomp_processor:initial_state(Configuration, ProcInitArgs, Ref). Ref is already in scope from line 64.

Commit 3 - Integration test

deps/rabbitmq_stomp/test/connections_SUITE.erl

Add node_connection_limit to all/0. Add test function:

node_connection_limit(Config) ->
    rabbit_ct_broker_helpers:rpc(Config, 0, application, set_env,
                                 [rabbitmq_stomp, max_connections, 0]),
    StompPort = get_stomp_port(Config),
    {ok, Sock} = gen_tcp:connect(localhost, StompPort, [{active, false}, binary]),
    ConnectFrame = <<"CONNECT\nlogin:guest\npasscode:guest\naccept-version:1.2\n\n\0">>,
    ok = gen_tcp:send(Sock, ConnectFrame),
    {ok, Data} = gen_tcp:recv(Sock, 0, 5000),
    {ok, Frame, _} = rabbit_stomp_frame:parse(Data, rabbit_stomp_frame:initial_state()),
    'ERROR' = Frame#stomp_frame.command,
    gen_tcp:close(Sock),
    rabbit_ct_broker_helpers:rpc(Config, 0, application, set_env,
                                 [rabbitmq_stomp, max_connections, infinity]).

What is NOT changed

  • deps/rabbitmq_web_stomp/src/rabbit_web_stomp_handler.erl - keeps calling initial_state/2
  • deps/rabbit/src/tcp_listener_sup.erl - not modified
  • deps/rabbitmq_stomp/test/config_schema_SUITE.erl - snippets file drives this test automatically

Verification notes

  • ranch:info(RanchRef) counts the current connection in active_connections because ranch:handshake/1 (called inside rabbit_networking:handshake/2) has completed by the time process_connect/3 runs
  • check_node_connection_limit(undefined) returns ok immediately, ensuring Web STOMP is unaffected
  • The non_negative_integer validator from rabbit.schema is globally available

Issue 16347 - stomp.max_connections implementation plan

Context

The MQTT implementation is complete (PR #16367). This document covers the STOMP plugin.

Background on approach is in rabbitmq-server-16347.md.

Approach: per-Ranch-ref via ranch:info/1

The correct approach for STOMP is to read active_connections from ranch:info(RanchRef), matching the established RabbitMQ pattern used by every other protocol:

| Protocol   | Reader                     | Mechanism            |
| ---------- | -------------------------- | -------------------- |
| AMQP 0.9.1 | rabbit_reader.erl          | ranch:info(RanchRef) |
| AMQP 1.0   | rabbit_amqp_reader.erl     | ranch:info(RanchRef) |
| Stream     | rabbit_stream_reader.erl   | ranch:info(RanchRef) |
| STOMP      | rabbit_stomp_processor.erl | ranch:info(RanchRef) |

MQTT is the exception: it was changed to use the PG scope ETS table because MQTT explicitly supports port-to-vhost mapping (a documented feature that creates multiple Ranch refs for a single logical MQTT service), and @ansd requested the change during code review of PR #16367. STOMP has no port-to-vhost mapping, so the per-Ranch-ref approach is correct.

> Limit comparison — correct

Ranch counts the connection process from the moment it is accepted, before the STOMP CONNECT frame arrives. So the current connection IS already included in active_connections when the check runs. > Limit is correct and consistent with all other protocols.

With Limit = 0: the first connection has ActiveConns = 1; 1 > 0 is true → reject ✓
With Limit = 1: the first connection has ActiveConns = 1; 1 > 1 is false → allow ✓; the second has ActiveConns = 2; 2 > 1 is true → reject ✓

Web STOMP exclusion — intentional, matches AMQP 1.0 and Stream pattern

rabbit_web_stomp_handler calls initial_state/2, which delegates to initial_state/3 with undefined. check_node_connection_limit(undefined) -> ok bypasses the check. This is identical to how AMQP 1.0 and Stream handle WebSocket/non-Ranch connections.

Log and error message — consistent with other protocols

All protocols (AMQP 0.9.1, AMQP 1.0, Stream) use "node connection limit" in their log and error messages for the same per-Ranch-ref check. The STOMP message is consistent.

STOMP architecture

initial_state/2 wrapper → initial_state/3

rabbit_stomp_processor:initial_state/2 is called from two places:

  1. deps/rabbitmq_stomp/src/rabbit_stomp_reader.erl:83 — plain STOMP (has Ref)
  2. deps/rabbitmq_web_stomp/src/rabbit_web_stomp_handler.erl:215 — Web STOMP (no Ref)

Keep initial_state/2 as a one-line wrapper that calls initial_state/3 with undefined, so Web STOMP requires no change. The plain STOMP reader calls initial_state/3 with Ref.

Error handling pattern

process_connect/3 in rabbit_stomp_processor.erl uses a maybe chain. Error cases are handled in an else block using the error/3 helper. We use the distinct atom node_connection_limit_exceeded to produce a distinct error message, following the existing quota_exceeded precedent for vhost limits.

Ranch dep

ranch is listed in deps/rabbitmq_stomp/Makefile:33 (DEPS = ranch rabbit_common rabbit).

Files to change

Commit 1 - Schema

deps/rabbitmq_stomp/priv/schema/rabbitmq_stomp.schema

Add after the stomp.num_acceptors.tcp mapping:

{mapping, "stomp.max_connections", "rabbitmq_stomp.max_connections",
    [{datatype, [{atom, infinity}, integer]}, {validators, ["non_negative_integer"]}]}.

{translation, "rabbitmq_stomp.max_connections",
fun(Conf) ->
    case cuttlefish:conf_get("stomp.max_connections", Conf, undefined) of
        undefined -> cuttlefish:unset();
        infinity  -> infinity;
        Val when is_integer(Val) -> Val;
        _         -> cuttlefish:invalid("should be a non-negative integer")
    end
end}.

deps/rabbitmq_stomp/test/config_schema_SUITE_data/rabbitmq_stomp.snippets

Add a comma to the last entry and append:

 {max_connections,
  "stomp.max_connections = 10",
  [{rabbitmq_stomp,[{max_connections, 10}]}],
  [rabbitmq_stomp]}

Commit 2 - Enforcement

deps/rabbitmq_stomp/src/rabbit_stomp_processor.erl

#cfg{} record: Add ranch_ref :: ranch:ref() | undefined field.

Exports: Add initial_state/3 to the -export list.

initial_state/2: Change to a one-line wrapper:

initial_state(Configuration, ProcInitArgs) ->
    initial_state(Configuration, ProcInitArgs, undefined).

initial_state/3 (new): Accepts RanchRef as third argument, same body as current initial_state/2 but stores RanchRef in #cfg{}.

process_connect/3 - function head: Add ranch_ref = RanchRef to the #cfg{} pattern match alongside conn_info and ssl_login_name.

process_connect/3 - maybe chain: Add check_node_connection_limit as the first check (before negotiate_version):

Res1 = maybe
    ok ?= check_node_connection_limit(RanchRef),
    {ok, Version} ?= negotiate_version(Frame),
    ...

process_connect/3 - else block: Add a clause for node_connection_limit_exceeded:

{error, node_connection_limit_exceeded} ->
    error("Bad CONNECT",
          "Connection refused: node connection limit reached",
          State)

check_node_connection_limit/1 (new): Add near check_vhost_connection_limit/1:

check_node_connection_limit(undefined) ->
    ok;
check_node_connection_limit(RanchRef) ->
    case application:get_env(rabbitmq_stomp, max_connections, infinity) of
        infinity ->
            ok;
        Limit when is_integer(Limit), Limit >= 0 ->
            #{active_connections := ActiveConns} = ranch:info(RanchRef),
            case ActiveConns > Limit of
                false ->
                    ok;
                true ->
                    ?LOG_ERROR("STOMP connection failed: node connection limit ~tp is reached",
                               [Limit]),
                    {error, node_connection_limit_exceeded}
            end;
        _ ->
            ok
    end.

deps/rabbitmq_stomp/src/rabbit_stomp_reader.erl

Change initial_state/2 call to initial_state/3, passing Ref:

ProcState = rabbit_stomp_processor:initial_state(
              Configuration, ProcInitArgs, Ref),

Ref is already in scope as the second element of the [SupHelperPid, Ref, Configuration] arg list.

Commit 3 - Integration test

deps/rabbitmq_stomp/test/connections_SUITE.erl

all/0: Add node_connection_limit to the list.

node_connection_limit/1:

node_connection_limit(Config) ->
    rabbit_ct_broker_helpers:rpc(Config, 0, application, set_env,
                                 [rabbitmq_stomp, max_connections, 0]),
    StompPort = get_stomp_port(Config),
    {ok, Sock} = gen_tcp:connect(localhost, StompPort, [{active, false}, binary]),
    try
        ConnectFrame = <<"CONNECT\nlogin:guest\npasscode:guest\naccept-version:1.2\n\n\0">>,
        ok = gen_tcp:send(Sock, ConnectFrame),
        {ok, Data} = gen_tcp:recv(Sock, 0, 5000),
        {ok, Frame, _} = rabbit_stomp_frame:parse(Data, rabbit_stomp_frame:initial_state()),
        'ERROR' = Frame#stomp_frame.command
    after
        gen_tcp:close(Sock),
        rabbit_ct_broker_helpers:rpc(Config, 0, application, set_env,
                                     [rabbitmq_stomp, max_connections, infinity])
    end.

The limit is set to 0 so ActiveConns = 1 > 0 at check time, causing the first connection to receive an ERROR frame. The try/after ensures the limit is reset even if the test fails.

What is NOT changed

  • deps/rabbitmq_web_stomp/src/rabbit_web_stomp_handler.erl — keeps calling initial_state/2, no changes needed
  • deps/rabbitmq_stomp/test/config_schema_SUITE.erl — snippets file drives this test automatically

Verification notes

  • ranch:info(RanchRef) counts the current connection in active_connections because ranch:handshake/1 has completed by the time process_connect/3 runs
  • check_node_connection_limit(undefined) returns ok immediately, ensuring Web STOMP is unaffected
  • The non_negative_integer validator from rabbit.schema is globally available
  • In a test with Limit = 0: the single test connection has ActiveConns = 1, 1 > 0 = true → ERROR frame returned ✓