@appcypher
Last active December 20, 2024 11:13
Multi-RAFT

How to Achieve Synchronization Across Multiple RAFT Groups

Achieving synchronization across multiple RAFT groups, especially for operations that span different partitions and require atomicity, is a complex but critical aspect of building scalable, consistent distributed systems. Below, we examine the key challenges and the main strategies for atomic operations across RAFT groups.

In a multi-RAFT architecture, each RAFT group independently manages a subset of data. While this design enhances scalability and fault tolerance, it complicates operations that must be performed atomically across multiple partitions.


Key Challenges

  • Independent Consensus Groups: Each RAFT group independently handles leader election, log replication, and state management.
  • Distributed Coordination: Coordinating actions across multiple groups introduces additional latency and complexity.
  • Consistency Guarantees: Maintaining strict consistency for multi-partition operations is non-trivial.
 ┌───────────┐              ┌───────────┐                ┌───────────┐
 │  RAFT G1  │              │  RAFT G2  │                │  RAFT G3  │
 │ (Shard A) │              │ (Shard B) │                │ (Shard C) │
 └─────┬─────┘              └─────┬─────┘                └─────┬─────┘
       │                          │                            │
       └───── Operations that need to span multiple groups ────┘

Strategies for Cross-RAFT Group Synchronization

1. Two-Phase Commit (2PC)

2PC is a classic distributed transaction protocol:

  • Phase 1 (Prepare): A coordinator asks all involved RAFT groups if they can commit the proposed changes.
  • Phase 2 (Commit/Rollback): If all groups agree, the coordinator sends a commit; otherwise, a rollback.
     Coordinator
         │
 Phase 1 │ Prepare
         v
 ┌───────────┐        ┌───────────┐
 │  RAFT G1  │        │  RAFT G2  │
 └─────┬─────┘        └─────┬─────┘
       │  yes/no            │ yes/no
       v                    v
   If all yes: Commit else Rollback

Pros: Ensures atomicity across groups.
Cons: Blocking: if the coordinator fails after participants have prepared, they must hold their locks until it recovers; every transaction also pays two rounds of cross-group latency.
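The two phases above can be sketched as follows. This is a minimal in-memory simulation, not a real RAFT client: `RaftGroup` and `two_phase_commit` are illustrative names, and a real participant would durably log its prepare vote before replying.

```python
class RaftGroup:
    """Stub for a RAFT group leader that votes on and applies a transaction."""
    def __init__(self, name, can_commit=True):
        self.name = name
        self.can_commit = can_commit  # whether this group will vote yes
        self.state = "idle"

    def prepare(self, txn):
        # Phase 1: a real group would durably log the intent, then vote.
        self.state = "prepared" if self.can_commit else "aborted"
        return self.can_commit

    def commit(self, txn):
        self.state = "committed"

    def rollback(self, txn):
        self.state = "rolled_back"


def two_phase_commit(groups, txn):
    # Phase 1 (Prepare): collect a yes/no vote from every group.
    votes = [g.prepare(txn) for g in groups]
    # Phase 2 (Commit/Rollback): all-yes commits, any-no rolls back.
    if all(votes):
        for g in groups:
            g.commit(txn)
        return "committed"
    for g in groups:
        if g.state == "prepared":
            g.rollback(txn)
    return "rolled_back"
```

Note that a single "no" vote forces every prepared group to roll back, which is exactly where 2PC's atomicity (and its blocking behavior) comes from.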

2. Three-Phase Commit (3PC)

3PC adds a pre-commit phase to reduce blocking:

  • Can Commit?
  • Pre-Commit
  • Do Commit

This reduces the chance of blocking when the coordinator crashes, but it is more complex, adds an extra round trip of latency, and its non-blocking guarantee assumes there are no network partitions.

3. Distributed Transaction Managers

Use a dedicated transaction manager service that orchestrates multi-group commits:

  • Coordinator Service: Tracks transaction state.
  • RAFT Groups: Participate by preparing and committing changes under coordinator instructions.
              ┌──────────────────┐
              │ Transaction Mgr  │
              └───┬──────────────┘
                  │ coordinates
   ┌──────────────┼───────────────┐
   v              v               v
 RAFT G1       RAFT G2         RAFT G3
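The coordinator's main job is tracking durable transaction state so that the outcome survives a coordinator crash. A minimal sketch, assuming an in-memory log standing in for what would, in practice, be a replicated (often itself RAFT-backed) store:

```python
class TransactionManager:
    """Illustrative coordinator that records transaction state transitions.
    In a real system self.log would be durable and replicated."""
    def __init__(self):
        self.log = {}  # txn_id -> "BEGUN" | "PREPARED" | "COMMITTED"

    def begin(self, txn_id):
        self.log[txn_id] = "BEGUN"

    def mark_prepared(self, txn_id):
        # All participating RAFT groups voted yes; the outcome is now decided.
        self.log[txn_id] = "PREPARED"

    def mark_committed(self, txn_id):
        self.log[txn_id] = "COMMITTED"

    def recover(self, txn_id):
        # After a coordinator crash, the durable state decides the outcome:
        # a PREPARED (or COMMITTED) transaction rolls forward; anything
        # earlier, or unknown, rolls back.
        state = self.log.get(txn_id)
        return "commit" if state in ("PREPARED", "COMMITTED") else "rollback"
```

The recovery rule is the crux: because "all groups voted yes" is recorded before any group commits, a restarted coordinator can always finish the transaction deterministically.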

4. Optimistic Concurrency Control (OCC)

With OCC, each RAFT group proceeds optimistically, only validating at commit time. If conflicts are detected, the transaction retries.

Pros: Fast when conflicts are rare.
Cons: Requires a solid validation phase and possible retries.
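The validation step can be sketched with per-key versions: a transaction records the version of everything it read, and commit succeeds only if those versions are unchanged. `OccStore` is an illustrative per-group store, not a real API.

```python
class OccStore:
    """Illustrative versioned store for one RAFT group, validating at commit."""
    def __init__(self):
        self.data = {}      # key -> value
        self.versions = {}  # key -> monotonically increasing version

    def read(self, key):
        # Transactions remember the version alongside the value.
        return self.data.get(key), self.versions.get(key, 0)

    def try_commit(self, read_set, writes):
        # Validate: every key read must still be at the version we saw;
        # otherwise another transaction won the race and we must retry.
        for key, seen_version in read_set.items():
            if self.versions.get(key, 0) != seen_version:
                return False
        for key, value in writes.items():
            self.data[key] = value
            self.versions[key] = self.versions.get(key, 0) + 1
        return True
```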

5. Sagas and Compensation-Based Transactions

Sagas split a multi-step transaction into a series of local steps across RAFT groups, each with a compensating action if something fails later.

Pros: More scalable, avoids long-held locks.
Cons: Complex to define compensations and doesn’t provide strict ACID semantics.
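The pattern reduces to executing (action, compensation) pairs in order and, on failure, running the compensations for the completed steps in reverse. A minimal sketch (`run_saga` is an illustrative name):

```python
def run_saga(steps):
    """Execute (action, compensation) pairs in order; each action is a local
    transaction on one RAFT group. On failure, run the compensations for
    already-completed steps in reverse order."""
    completed = []
    for action, compensate in steps:
        try:
            action()
            completed.append(compensate)
        except Exception:
            # Undo everything that succeeded, newest first.
            for comp in reversed(completed):
                comp()
            return "compensated"
    return "completed"
```

Because intermediate states are visible before the saga finishes, this gives eventual atomicity rather than isolation, which is the ACID trade-off noted above.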

6. Application-Level Coordination

Put atomicity logic into the application layer using logical transactions, idempotent operations, and retries.

Pros: Highly flexible, domain-specific.
Cons: Complexity shifts to the application developer.
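The idempotency half of this approach can be sketched with request-id deduplication: the application tags each logical operation with a client-chosen id, so a retry that lands on any RAFT group replays the cached result instead of re-executing. `IdempotentHandler` is an illustrative name.

```python
class IdempotentHandler:
    """Deduplicate retried requests by a client-supplied request id, making
    application-level retries safe. A real system would persist self.results
    in the replicated state machine, not in process memory."""
    def __init__(self):
        self.results = {}  # request_id -> cached result

    def apply(self, request_id, operation):
        if request_id in self.results:
            # Replayed retry: return the original result, no new side effect.
            return self.results[request_id]
        result = operation()
        self.results[request_id] = result
        return result
```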


Real-World Examples

  • CockroachDB: Uses 2PC across multiple RAFT ranges to ensure atomic multi-range transactions.
  • TiKV: Employs a Percolator-style 2PC and multi-version concurrency control across Regions, each managed by its own RAFT group.

Considerations and Best Practices

  • Failure Handling: Plan for coordinator crashes and partial failures.
  • Performance Impact: Understand the latency and throughput trade-offs.
  • Scalability: Choose methods that scale with your number of RAFT groups.
  • Idempotency and Retries: Make operations repeatable without harmful side effects.
  • Monitoring and Observability: Track transaction states and errors across multiple RAFT groups.

Conclusion

Synchronizing and ensuring atomic operations across multiple RAFT groups can be achieved using protocols like 2PC or 3PC, distributed transaction managers, OCC, Sagas, or application-level logic. Each method has trade-offs in complexity, performance, and consistency. The optimal strategy depends on your system’s requirements, performance goals, and operational complexity tolerance.
