
@hawaijar
Created August 1, 2025 09:32

Consistent Hashing Implementation in Go

A production-ready consistent hashing implementation with virtual nodes for building distributed systems.

Features

  • Virtual nodes for better key distribution (configurable replica count)
  • O(log n) lookups via binary search over a sorted ring
  • Thread-safe operations with RWMutex
  • Minimal key redistribution when nodes join/leave
  • Replication support via GetNodes() for fault tolerance
  • Statistics and monitoring capabilities

Usage

// Create a ring with 150 virtual nodes per physical node
ring := consistenthash.New(150)

// Add nodes
ring.AddNode("cache-server-1")
ring.AddNode("cache-server-2")
ring.AddNode("cache-server-3")

// Find which node handles a key
node := ring.GetNode("user:12345")
fmt.Printf("Key 'user:12345' maps to node: %s\n", node)

// Get multiple nodes for replication
replicas := ring.GetNodes("user:12345", 3)
fmt.Printf("Replicas: %v\n", replicas)

// Remove a node (keys automatically redistribute)
ring.RemoveNode("cache-server-2")

How It Works

  1. Each physical node is mapped to multiple points on the hash ring (virtual nodes)
  2. Keys are hashed to find their position on the ring
  3. A key belongs to the first node found clockwise from its position
  4. Virtual nodes ensure even distribution and smooth rebalancing (see the sketch below)
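
For concreteness, here is a minimal, self-contained sketch of the mechanics described above. It uses CRC32 and a binary search over a sorted slice, in line with what the gist describes, but the type and field names (Ring, hashes, owners) are illustrative and need not match the actual code.

// Minimal sketch of clockwise lookup with virtual nodes.
// Names are illustrative; the gist's implementation may differ.
package main

import (
	"fmt"
	"hash/crc32"
	"sort"
	"strconv"
	"sync"
)

type Ring struct {
	mu       sync.RWMutex      // guards all fields for concurrent use
	replicas int               // virtual nodes per physical node
	hashes   []uint32          // sorted positions of all virtual nodes on the ring
	owners   map[uint32]string // virtual-node position -> physical node name
}

func NewRing(replicas int) *Ring {
	return &Ring{replicas: replicas, owners: make(map[uint32]string)}
}

func hashKey(s string) uint32 { return crc32.ChecksumIEEE([]byte(s)) }

// AddNode places `replicas` virtual nodes for one physical node on the ring.
func (r *Ring) AddNode(node string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	for i := 0; i < r.replicas; i++ {
		h := hashKey(node + "#" + strconv.Itoa(i))
		r.owners[h] = node
		r.hashes = append(r.hashes, h)
	}
	sort.Slice(r.hashes, func(i, j int) bool { return r.hashes[i] < r.hashes[j] })
}

// GetNode returns the first node clockwise from the key's position (O(log n)).
func (r *Ring) GetNode(key string) string {
	r.mu.RLock()
	defer r.mu.RUnlock()
	if len(r.hashes) == 0 {
		return ""
	}
	h := hashKey(key)
	i := sort.Search(len(r.hashes), func(i int) bool { return r.hashes[i] >= h })
	if i == len(r.hashes) { // wrapped past the largest position: back to the start
		i = 0
	}
	return r.owners[r.hashes[i]]
}

func main() {
	ring := NewRing(150)
	ring.AddNode("cache-server-1")
	ring.AddNode("cache-server-2")
	ring.AddNode("cache-server-3")
	fmt.Println(ring.GetNode("user:12345"))
}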

Production Considerations

  • Replica count: 100-200 virtual nodes per physical node is typical
  • Hash function: CRC32 provides good distribution and speed
  • Monitoring: Use GetStats() and GetDistribution() to monitor balance (a standalone check is sketched below)
  • Thread safety: All operations are safe for concurrent use
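
GetStats() and GetDistribution() are the gist's own monitoring helpers; as a rough stand-in, balance can also be checked externally by hashing a batch of synthetic keys and counting how many land on each node. The sketch below reuses the hypothetical Ring type from the "How It Works" section.

// Rough balance check: distribute synthetic keys and count per-node hits.
func distribution(r *Ring, keys int) map[string]int {
	counts := make(map[string]int)
	for i := 0; i < keys; i++ {
		counts[r.GetNode("key:"+strconv.Itoa(i))]++
	}
	return counts
}

// Example: with 150 virtual nodes per physical node, each of three nodes
// should receive roughly a third of the keys.
// for node, n := range distribution(ring, 10000) {
// 	fmt.Printf("%s: %d keys (%.1f%%)\n", node, n, 100*float64(n)/10000)
// }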

Example Output

Key Distribution:
user:123 -> cache-2
session:456 -> cache-1
product:789 -> cache-3
order:234 -> cache-2
cart:567 -> cache-1

Adding new node 'cache-4'...
Only 1/5 keys moved (20.0%)

Load distribution (10,000 keys):
  cache-1: 2,489 keys (24.9%)
  cache-2: 2,534 keys (25.3%)
  cache-3: 2,456 keys (24.6%)
  cache-4: 2,521 keys (25.2%)
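
A "keys moved" figure like the one above can be reproduced with a small harness along these lines, again assuming the hypothetical Ring type from the earlier sketch: record each key's owner, add the new node, and count how many owners changed.

// Measure key movement on membership change.
func movedKeys(r *Ring, keys []string, newNode string) int {
	before := make(map[string]string, len(keys))
	for _, k := range keys {
		before[k] = r.GetNode(k)
	}
	r.AddNode(newNode)
	moved := 0
	for _, k := range keys {
		if r.GetNode(k) != before[k] {
			moved++
		}
	}
	return moved
}

// With N nodes already in the ring, roughly 1/(N+1) of the keys are expected
// to move to the new node; the rest keep their original owner.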

Use Cases

  • Distributed caching (Redis, Memcached)
  • Database sharding
  • Load balancing with session affinity
  • Distributed storage systems
  • Any system requiring consistent key-to-node mapping

Learn More

For an interactive visualization and detailed explanation, visit: https://sdmastery.vercel.app/concepts/consistent-hashing
