Hi team!
I've observed that the first call to the API endpoint /api/v1/address/:address is extremely slow when querying addresses with large transaction histories. Subsequent calls are much faster, which suggests caching is involved (possibly the RocksDB block cache).
The mempoolelectrs service runs mempool/electrs:v3.2.0; the api service runs mempool/backend:latest.
Performed via:
# (1 tx)
time -p curl http://api:8999/api/v1/address/bc1pe635ss7xaqj44czuz0sxfz8v6cpp8w3fmmk2cp62htwawcart9pssm8twq
# (3k+ txs)
time -p curl http://api:8999/api/v1/address/126cLS46uhg5KuKFeaQMpSjnPq8gBy44S6
# (400k+ txs)
time -p curl http://api:8999/api/v1/address/bc1qryhgpmfv03qjhhp2dj8nw8g4ewg08jzmgy3cyx
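As a minimal sketch, a loop along these lines reproduces the measurements (it assumes the same backend URL and addresses as the commands above, and uses curl's built-in %{time_total} timer in place of time -p):

```bash
#!/usr/bin/env bash
# Hits each address twice and prints curl's total time for the
# cold (first) and warm (second) call.
set -euo pipefail

API="http://api:8999/api/v1/address"

ADDRESSES=(
  bc1pe635ss7xaqj44czuz0sxfz8v6cpp8w3fmmk2cp62htwawcart9pssm8twq  # 1 tx
  126cLS46uhg5KuKFeaQMpSjnPq8gBy44S6                              # ~3.6k txs
  bc1qryhgpmfv03qjhhp2dj8nw8g4ewg08jzmgy3cyx                      # ~446k txs
)

for addr in "${ADDRESSES[@]}"; do
  for run in first second; do
    t=$(curl -s -o /dev/null -w '%{time_total}' "$API/$addr")
    printf '%s %s call: %ss\n' "$addr" "$run" "$t"
  done
done
```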
| Address | Tx Count | First Query (s) | Second Query (s) |
|---|---|---|---|
| bc1pe635ss7xaqj44czuz0sxfz8v6cpp8w3fmmk2cp62htwawcart9pssm8twq | 1 | 0.12 | 0.01 |
| 126cLS46uhg5KuKFeaQMpSjnPq8gBy44S6 | ~3,600 | 8.44 | 0.17 |
| bc1qryhgpmfv03qjhhp2dj8nw8g4ewg08jzmgy3cyx | ~446,000 | 407 | 26.7 → 25.4 |
I'm using the mempool/backend:latest image configured with:
MEMPOOL_BACKEND=electrum
ELECTRUM_HOST=mempoolelectrs
CORE_RPC_HOST=app (Bitcoin Knots)
DATABASE_ENABLED=true
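For context, the equivalent docker run invocation would look roughly like this; the network name and the ELECTRUM_PORT value are assumptions on my part (the port matches the --electrum-rpc-addr below), not copied from our manifests:

```bash
# Sketch only: network name and port mapping are assumed, not taken from our deployment.
docker run -d --name api \
  --network mempool-net \
  -e MEMPOOL_BACKEND=electrum \
  -e ELECTRUM_HOST=mempoolelectrs \
  -e ELECTRUM_PORT=50001 \
  -e CORE_RPC_HOST=app \
  -e DATABASE_ENABLED=true \
  -p 8999:8999 \
  mempool/backend:latest
```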
The Electrs backend is running mempool/electrs:v3.2.0 with:
/bin/electrs \
--address-search \
--cookie user:changeme123 \
--db-dir /electrs \
--network mainnet \
--electrum-rpc-addr 0.0.0.0:50001 \
--http-addr 0.0.0.0:3000 \
--cors '*' \
--daemon-rpc-addr app:8332 \
--jsonrpc-import \
--precache-threads 16 \
--electrum-txs-limit 1000000 \
-v
2025-05-11T13:38:23.436436336Z Config { log: StdErrLog { verbosity: Warn, quiet: false, show_level: true, timestamp: Off, modules: [], writer: "stderr", color_choice: Never }, network_type: Bitcoin, magic: None, db_path: "/electrs/mainnet", daemon_dir: "/root/.bitcoin", blocks_dir: "/root/.bitcoin/blocks", daemon_rpc_addr: 10.43.117.195:8332, cookie: Some("user:changeme123"), electrum_rpc_addr: 0.0.0.0:50001, http_addr: 0.0.0.0:3000, http_socket_file: None, rpc_socket_file: None, monitoring_addr: 127.0.0.1:4224, jsonrpc_import: true, light_mode: false, main_loop_delay: 500, address_search: true, index_unspendables: false, cors: Some("*"), precache_scripts: None, precache_threads: 16, utxos_limit: 500, electrum_txs_limit: 1000000, electrum_banner: "Welcome to mempool-electrs 3.2.0-e6eb9b5(dirty)", mempool_backlog_stats_ttl: 10, mempool_recent_txs_size: 10, rest_default_block_limit: 10, rest_default_chain_txs_per_page: 25, rest_default_max_mempool_txs: 50, rest_default_max_address_summary_txs: 5000, rest_max_mempool_page_size: 1000, rest_max_mempool_txid_page_size: 10000 }
Processor : AMD EPYC 7K62 48-Core Processor
CPU cores : 96 @ 1500.000 MHz
Note that we dedicate only 4 vCPUs to the mempool/electrs service, hence --precache-threads 16 (default: 4 * CORE_COUNT).
YABS v2025-04-20 test (curl -sL yabs.sh | bash -s -- -ig):
fio Disk Speed Tests (Mixed R/W 50/50) (Partition /dev/rbd6):
---------------------------------
Block Size | 4k (IOPS) | 64k (IOPS)
------ | --- ---- | ---- ----
Read | 68.99 MB/s (17.2k) | 369.67 MB/s (5.7k)
Write | 69.19 MB/s (17.2k) | 371.62 MB/s (5.8k)
Total | 138.19 MB/s (34.5k) | 741.30 MB/s (11.5k)
| |
Block Size | 512k (IOPS) | 1m (IOPS)
------ | --- ---- | ---- ----
Read | 415.55 MB/s (811) | 400.94 MB/s (391)
Write | 437.63 MB/s (854) | 427.64 MB/s (417)
Total | 853.19 MB/s (1.6k) | 828.58 MB/s (808)
YABS completed in 59 sec
RocksDB logs (from the LOG files in /electrs/mainnet/newindex/*) show that the block cache is limited to just 8.00 MB, which may be contributing to low cache hit rates and frequent evictions during queries on addresses with large transaction histories. While I'm not entirely sure this is the primary bottleneck, I looked for a way to increase the cache size (e.g. a BLOCKCACHE_MB environment variable) but couldn't find any such option currently available. Is there a supported way to adjust the RocksDB block cache size, or could this potentially be made configurable?
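For reference, this is roughly how the 8 MB figure can be pulled out of those LOG files; the exact layout of the option dump may differ between RocksDB versions, so treat the pattern as an approximation:

```bash
# Look for the block cache capacity (in bytes) in the RocksDB option dump
# that the electrs index databases write at startup.
grep -A5 'block_cache_options' /electrs/mainnet/newindex/*/LOG | grep -i 'capacity'
```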
Let me know how best to proceed — thanks!
I cannot create a GitHub issue under https://github.com/mempool/electrs/issues nor https://github.com/mempool/mempool/issues for some reason (error: "Unable to create issue."), hence this gist.