
Rate Limiter Implementations in Go

Production-ready rate limiting algorithms for protecting APIs and managing resource consumption.

Algorithms Included

1. Token Bucket

  • Best for: APIs needing smooth rate limiting with burst capability
  • Pros: Allows bursts, smooth traffic shaping, memory efficient
  • Cons: Tokens can accumulate, leading to large bursts (see the refill sketch after this list)

2. Sliding Window Log

  • Best for: Billing systems, strict rate enforcement
  • Pros: Most accurate, no boundary issues
  • Cons: High memory usage (stores timestamps)

3. Sliding Window Counter

  • Best for: General purpose rate limiting
  • Pros: Good accuracy, reasonable memory usage
  • Cons: Slight approximation at window boundaries

4. Fixed Window Counter

  • Best for: Simple use cases, reset-based quotas
  • Pros: Very simple, minimal memory
  • Cons: Boundary spikes (up to 2x the limit possible at window edges)
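
As a rough sketch of how the token-bucket math works (illustrative only; the field and method names here are assumptions, not the gist's exact code), each key tracks a token count that refills continuously at the sustained rate and is capped at the burst size:

import (
    "math"
    "time"
)

// bucket holds per-key token-bucket state (illustrative sketch; names are assumptions).
type bucket struct {
    tokens float64   // tokens currently available
    last   time.Time // time of the last refill
}

// allow refills the bucket for the time elapsed since the last call, capped at
// burst, then spends one token if one is available.
func (b *bucket) allow(rate, burst float64, now time.Time) bool {
    elapsed := now.Sub(b.last).Seconds()
    b.tokens = math.Min(burst, b.tokens+elapsed*rate) // refill at `rate` tokens/second
    b.last = now
    if b.tokens >= 1 {
        b.tokens-- // one token per request
        return true
    }
    return false
}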

Quick Start

// Token Bucket: 100 requests/second with burst of 200
limiter := ratelimiter.NewTokenBucket(100, 200)

// Check if request is allowed
if allowed, _ := limiter.Allow(userID); allowed {
    // Process request
} else {
    // Return 429 Too Many Requests
}

// Sliding Window Log: 1000 requests per minute
logLimiter := ratelimiter.NewSlidingWindowLog(1000, time.Minute)

// Consume multiple tokens at once (e.g., a batch request costing 10 units)
if allowed, _ := logLimiter.AllowN(apiKey, 10); allowed {
    // Process batch request
}
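
Both snippets above, and the middleware example further down, assume each algorithm satisfies a small common interface roughly like the following (inferred from the calls shown; the gist's exact definition may differ):

// RateLimiter is the interface the examples assume every algorithm implements.
type RateLimiter interface {
    // Allow reports whether a single request for the given key is permitted.
    Allow(key string) (bool, error)
    // AllowN reports whether n requests for the given key are permitted at once.
    AllowN(key string, n int) (bool, error)
}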

Comparison Chart

Algorithm        Memory           Accuracy   Burst Handling  Use Case
Token Bucket     O(n) users       Good       Excellent       APIs with burst traffic
Sliding Log      O(n×m) requests  Perfect    Good            Billing, strict limits
Sliding Counter  O(n×k) buckets   Very Good  Good            General purpose
Fixed Window     O(n) users       Fair       Poor            Simple quotas

Production Tips

1. Choose the Right Algorithm

// For API rate limiting with bursts
tokenBucket := NewTokenBucket(
    1000,  // 1000 requests per second sustained
    5000,  // Allow bursts up to 5000
)

// For strict billing enforcement  
slidingLog := NewSlidingWindowLog(
    10000,           // 10,000 requests
    24*time.Hour,    // Per day
)

// For general purpose with good accuracy
slidingCounter := NewSlidingWindowCounter(
    6000,            // 6000 requests
    time.Hour,       // Per hour  
    60,              // 60 one-minute buckets
)

2. Key Selection Strategy

// By User ID
key := fmt.Sprintf("user:%s", userID)

// By IP Address  
key := fmt.Sprintf("ip:%s", request.RemoteAddr)

// By API Key
key := fmt.Sprintf("api:%s", request.Header.Get("X-API-Key"))

// Composite (user + endpoint)
key := fmt.Sprintf("user:%s:endpoint:%s", userID, endpoint)

3. HTTP Middleware Example

func RateLimitMiddleware(limiter RateLimiter) gin.HandlerFunc {
    return func(c *gin.Context) {
        key := c.ClientIP() // or extract API key
        
        if allowed, _ := limiter.Allow(key); !allowed {
            c.JSON(429, gin.H{
                "error": "Rate limit exceeded",
                "retry_after": 60, // seconds
            })
            c.Abort()
            return
        }
        
        c.Next()
    }
}
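
Wiring the middleware into a router could then look like this (a sketch; the route, port, and limiter settings are placeholders):

func main() {
    limiter := ratelimiter.NewTokenBucket(100, 200) // 100 req/s sustained, burst of 200

    r := gin.Default()
    r.Use(RateLimitMiddleware(limiter))

    r.GET("/api/resource", func(c *gin.Context) {
        c.JSON(200, gin.H{"status": "ok"})
    })

    r.Run(":8080")
}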

4. Distributed Rate Limiting

For distributed systems, consider:

  • Redis-based rate limiting with Lua scripts (see the sketch after this list)
  • Gossip protocols for eventual consistency
  • Local + global rate limiting hybrid
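
For the Redis option, the usual approach is to keep the check-and-increment atomic inside a short Lua script. Below is a minimal fixed-window sketch assuming the go-redis v9 client; the client, key naming, and limits are assumptions, not part of this gist:

import (
    "context"
    "time"

    "github.com/redis/go-redis/v9"
)

// fixedWindowScript atomically increments the counter for a key, sets its
// expiry on first use, and returns 1 if the request is allowed, 0 otherwise.
const fixedWindowScript = `
local current = redis.call("INCR", KEYS[1])
if current == 1 then
    redis.call("EXPIRE", KEYS[1], ARGV[2])
end
if current > tonumber(ARGV[1]) then
    return 0
end
return 1
`

// allowRedis checks a limit of `limit` requests per `window` for the given key.
func allowRedis(ctx context.Context, rdb *redis.Client, key string, limit int, window time.Duration) (bool, error) {
    res, err := rdb.Eval(ctx, fixedWindowScript, []string{key}, limit, int(window.Seconds())).Int()
    if err != nil {
        return false, err
    }
    return res == 1, nil
}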

5. Monitoring

// Add metrics
allowed, _ := limiter.Allow(key)
if allowed {
    rateLimitAllowed.Inc()
} else {
    rateLimitRejected.Inc()
}
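
The counters above are not declared in the snippet; assuming the Prometheus Go client, they could be defined like this:

import "github.com/prometheus/client_golang/prometheus"

var (
    // rateLimitAllowed counts requests that passed the limiter.
    rateLimitAllowed = prometheus.NewCounter(prometheus.CounterOpts{
        Name: "ratelimit_allowed_total",
        Help: "Requests allowed by the rate limiter.",
    })
    // rateLimitRejected counts requests rejected with 429.
    rateLimitRejected = prometheus.NewCounter(prometheus.CounterOpts{
        Name: "ratelimit_rejected_total",
        Help: "Requests rejected by the rate limiter.",
    })
)

func init() {
    prometheus.MustRegister(rateLimitAllowed, rateLimitRejected)
}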

Memory Management

All implementations include automatic garbage collection for inactive keys:

  • Token Bucket: GC every 5 minutes
  • Sliding Window Log: GC after window expires
  • Fixed Window: Old windows auto-expire
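
A pruning loop for inactive keys typically runs in a background goroutine on a ticker; the sketch below is illustrative (the map layout and five-minute interval mirror the Token Bucket note above, but this is not the gist's exact code):

import (
    "sync"
    "time"
)

// gcLoop periodically removes state for keys that have not been seen within maxIdle.
// lastSeen maps key -> last access time in this sketch; a real limiter would keep
// the bucket or window state behind the same mutex.
func gcLoop(mu *sync.Mutex, lastSeen map[string]time.Time, maxIdle time.Duration) {
    ticker := time.NewTicker(5 * time.Minute)
    defer ticker.Stop()

    for range ticker.C {
        cutoff := time.Now().Add(-maxIdle)
        mu.Lock()
        for key, seen := range lastSeen {
            if seen.Before(cutoff) {
                delete(lastSeen, key) // drop state for inactive keys
            }
        }
        mu.Unlock()
    }
}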

Performance Characteristics

  • Token Bucket: ~100ns per Allow() call
  • Sliding Log: ~500ns per Allow() call
  • Sliding Counter: ~200ns per Allow() call
  • Fixed Window: ~50ns per Allow() call

Benchmarked on a 2.6 GHz Intel Core i7 with concurrent access.
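
Figures like these can be reproduced with Go's built-in benchmarking under parallel load; a minimal sketch, assuming the NewTokenBucket constructor from the Quick Start:

import "testing"

// BenchmarkTokenBucketAllow exercises Allow from multiple goroutines, mirroring
// the concurrent-access setup described above.
func BenchmarkTokenBucketAllow(b *testing.B) {
    limiter := ratelimiter.NewTokenBucket(1_000_000, 1_000_000) // limits high enough that calls rarely reject

    b.ReportAllocs()
    b.RunParallel(func(pb *testing.PB) {
        for pb.Next() {
            limiter.Allow("bench-key")
        }
    })
}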

Learn More

For interactive visualizations and detailed explanations, visit: https://sdmastery.vercel.app/concepts/rate-limiting
