Numbers to Know (cheatsheet) - Useful during System Design Interviews

Latency Numbers

| Operation | $ns$ (nano) | $\mu s$ (micro) | $ms$ (milli) | notable | 1Bx |
| --- | --- | --- | --- | --- | --- |
| L1 cache reference | $0.5\ ns$ | | | | $0.5\ s$ |
| Branch mispredict | $5\ ns$ | | | | $5\ s$ |
| L2 cache reference | $7\ ns$ | | | 14x L1 cache | $7\ s$ |
| Mutex lock/unlock | $25\ ns$ | | | | $25\ s$ |
| Main memory reference | $100\ ns$ | | | 20x L2 cache, 200x L1 cache | $1.6\ mins$ |
| Compress 1K bytes with Zippy | $3,000\ ns$ | $3\ \mu s$ | $0.003\ ms$ | | $50\ mins$ |
| Send 1K bytes over 1 Gbps network | $10,000\ ns$ | $10\ \mu s$ | $0.01\ ms$ | | $\approx2.8\ hours$ |
| Read 4K randomly from SSD | $150,000\ ns$ | $150\ \mu s$ | $0.15\ ms$ | $\approx1\ GB/s$ SSD | $1.7\ days$ |
| Read 1MB sequentially from memory | $250,000\ ns$ | $250\ \mu s$ | $0.25\ ms$ | $\approx4\ GB/s$ memory | $2.9\ days$ |
| Round trip within same datacenter | $500,000\ ns$ | $500\ \mu s$ | $0.5\ ms$ | | $5.8\ days$ |
| Read 1MB sequentially from SSD | $1,000,000\ ns$ | $1,000\ \mu s$ | $1\ ms$ | $\approx1\ GB/s$ SSD (4x memory) | $11.6\ days$ |
| Read DB row from cache | | | $1\ ms - 5\ ms$ | | |
| Read DB row from disk | | | $5\ ms - 30\ ms$ | | |
| Write DB row | | | $5\ ms - 15\ ms$ | | |
| Disk seek | $10,000,000\ ns$ | $10,000\ \mu s$ | $10\ ms$ | 20x datacenter round trip | $16.5\ weeks$ |
| Read 1MB sequentially from disk | $20,000,000\ ns$ | $20,000\ \mu s$ | $20\ ms$ | $50\ MB/s$ (80x memory / 20x SSD) | $7.6\ months$ |
| Send packet CA->Netherlands->CA | $150,000,000\ ns$ | $150,000\ \mu s$ | $150\ ms$ | | $4.75\ years$ |

Note

The 1Bx column represents the values multiplied by 1 billion, to help mentally visualize the scale and proportions.
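
As a minimal sketch of how that scaling works: multiplying a nanosecond latency by one billion simply turns "N nanoseconds" into "N seconds", so the column is the ns value re-read as seconds and rendered in a friendlier unit. The unit breakpoints below are my own choice, not from the table:

```python
# Minimal sketch: the 1Bx column is the ns value re-read as seconds,
# then rendered in a human-friendly unit.
UNITS = [(31_557_600, "years"), (2_629_800, "months"), (604_800, "weeks"),
         (86_400, "days"), (3_600, "hours"), (60, "mins"), (1, "s")]

def one_billion_x(ns: float) -> str:
    seconds = ns  # N ns * 1e9 = N * 1e9 ns = N seconds
    for divisor, unit in UNITS:
        if seconds >= divisor:
            return f"{seconds / divisor:.1f} {unit}"
    return f"{seconds:.1f} s"

print(one_billion_x(150_000))      # 4K random SSD read    -> "1.7 days"
print(one_billion_x(500_000))      # datacenter round trip -> "5.8 days"
print(one_billion_x(150_000_000))  # CA->Netherlands->CA   -> "4.8 years"
```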

Also notable

  • Average server response time, good to bad: $\approx250\ ms - 1.5\ s$
  • Average server request rate: $1k/sec - 10k/sec$, depending on the workload (see the rough check after this list)
  • Internet users: $\approx5.5B$
  • Average daily internet usage per user: $\approx2.5\ h$
  • Average internet packet size: $\approx1.5\ KB$
  • Average network bandwidth of a modern server: $\approx25\ Gbps$
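
A rough back-of-envelope check combining a few of the figures above; the assumption of one average-sized packet per request is purely illustrative:

```python
# Back-of-envelope sketch using the figures above. Assumes (illustratively)
# one ~1.5 KB packet per request at the upper end of the request-rate range.
requests_per_sec = 10_000   # upper end of "average server request rate"
bytes_per_request = 1_500   # ~ average internet packet size
nic_bps = 25 * 10**9        # ~25 Gbps server NIC

traffic_bps = requests_per_sec * bytes_per_request * 8  # bits per second
print(f"{traffic_bps / 1e6:.0f} Mbps "
      f"= {traffic_bps / nic_bps:.2%} of the NIC")      # -> 120 Mbps = 0.48% of the NIC
```

At these request rates a single NIC is nowhere near saturated; bandwidth usually only becomes the constraint once payload sizes or fan-out grow much larger.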

Availability

| % | Downtime/day | Downtime/week | Downtime/month | Downtime/year |
| --- | --- | --- | --- | --- |
| $99\%$ | $14.40\ minutes$ | $1.68\ hours$ | $7.31\ hours$ | $3.65\ days$ |
| $99.99\%$ | $8.64\ s$ | $1.01\ minutes$ | $4.38\ minutes$ | $52.60\ minutes$ |
| $99.999\%$ | $864\ ms$ | $6.05\ s$ | $26.3\ s$ | $5.26\ minutes$ |
| $99.9999\%$ | $86.4\ ms$ | $604.8\ ms$ | $2.63\ s$ | $31.56\ s$ |
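
The table follows directly from downtime = period length × (1 − availability); a minimal sketch, using a 365.25-day year so the results line up with the figures above:

```python
# Minimal sketch: downtime per period = period length * (1 - availability).
PERIOD_SECONDS = {
    "day": 86_400,
    "week": 604_800,
    "month": 2_629_800,    # 1/12 of a 365.25-day year
    "year": 31_557_600,    # 365.25 days
}

def downtime_seconds(availability_pct: float, period: str) -> float:
    return PERIOD_SECONDS[period] * (1 - availability_pct / 100)

print(f"{downtime_seconds(99.99, 'day'):.2f} s")    # -> 8.64 s
print(f"{downtime_seconds(99.999, 'year'):.0f} s")  # -> 316 s (~5.26 minutes)
```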

Units

| $ns$ nanosecond | $\mu s$ microsecond | $ms$ millisecond | $s$ second |
| --- | --- | --- | --- |
| $1\ ns$ | $10^{-3}\ \mu s$ | $10^{-6}\ ms$ | $10^{-9}\ s$ |
| $1,000\ ns$ | $1\ \mu s$ | $10^{-3}\ ms$ | $10^{-6}\ s$ |
| $1,000,000\ ns$ | $1,000\ \mu s$ | $1\ ms$ | $10^{-3}\ s$ |
| $1,000,000,000\ ns$ | $1,000,000\ \mu s$ | $1,000\ ms$ | $1\ s$ |

Typical “components”

Redis

| Configuration Type | ops/sec (read/write) | latency | notable |
| --- | --- | --- | --- |
| Single instance | $100k/300k$ | $\approx0.1\ ms - 1\ ms$ | Rarely relevant: production deployments almost never run a single instance |
| Cluster scaling ($N$ instances) | $200k$ per instance | $\approx0.1\ ms - 1\ ms$ | Theoretically unbounded, but each network hop adds $\approx1\ ms - 2\ ms$ of latency |
| Redis Labs benchmark | $1M$ | | A Redis Labs press release documents reaching $1.2M$ transactions/sec with $<1\ ms$ latency; avoid quoting this in an interview |
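
A quick sizing sketch built on the per-instance figure above; the 2M ops/sec target is an assumed example, not a number from the table:

```python
import math

# Rough sizing sketch: instances needed for an assumed target throughput,
# using ~200k ops/sec per clustered instance (from the table above).
target_ops_per_sec = 2_000_000  # illustrative target, not from the table
ops_per_instance = 200_000

instances = math.ceil(target_ops_per_sec / ops_per_instance)
print(f"{instances} instances")  # -> 10 instances
# Per-op latency stays ~0.1-1 ms, plus ~1-2 ms for each extra network hop.
```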

Postgres

| Configuration Type | transactions/sec | read latency | write latency | large table JOINs |
| --- | --- | --- | --- | --- |
| Modern single server | $10k - 50k$ | $\approx10\ ms - 100\ ms$ | $\approx1\ ms - 5\ ms$ | |
| With replication, partitioning, indexing, connection pooling | $100k+$ | $\approx0.1\ ms - 5\ ms$ | $\approx0.5\ ms - 2\ ms$ | $\approx10\ ms - 100\ ms$ |

Kafka

| Configuration Type | records/sec | Bytes/sec |
| --- | --- | --- |
| Single broker | $200k - 500k$ | $\approx100\ MB/s - 250\ MB/s$ |
| Typical production cluster | $1M - 5M$ | $\approx1\ GB/s - 2\ GB/s$ |
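
For a sense of scale, the cluster figure translates into daily volume; a quick sketch using only the table's own numbers:

```python
# Rough sketch: daily volume moved by a typical production cluster at ~2 GB/s.
gb_per_sec = 2
tb_per_day = gb_per_sec * 86_400 / 1_000
print(f"~{tb_per_day:.0f} TB/day")  # -> ~173 TB/day
```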

AWS SQS

| Configuration Type | msg/sec | notable |
| --- | --- | --- |
| Standard | Unlimited: millions of msg/sec | |
| FIFO | $\approx300\ msg/sec - 3,000\ msg/sec$ | Range depends on batching, which dramatically improves throughput |

AWS DynamoDB

| Configuration Type | writes/sec | latency | notable |
| --- | --- | --- | --- |
| On-demand | $100k - 1M+$ | $\approx1\ ms - 10\ ms$ | Auto-scaling, partitioned; effectively unlimited if there are no hot partitions |

Credit

Originally by Peter Norvig: http://norvig.com/21-days.html#answers
