@GabiGrin
Created July 17, 2023 17:23
Batched Writes vs BulkWriter

Firestore provides two methods for writing multiple documents at once: Batched Writes and BulkWriter.

Let's understand them in more detail:

Batched Writes:

In Firestore, batched writes are a way to perform multiple write operations as a single atomic unit. A batch of writes completes atomically and can write to multiple documents.

  • Atomicity: All writes in the batch will either succeed or fail together. If any operation fails, the whole batch fails, and changes are not applied.
  • Limitations: There's a limit of 500 operations per batch. Each write counts as one operation, so a single batch can contain any combination of set(), update(), or delete() calls, up to 500 in total.
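Because of that 500-operation cap, a workload larger than one batch has to be split into several batches, each of which commits atomically on its own (but not across batches). A minimal sketch of the splitting logic, independent of any Firestore client — the `chunkOps` helper name and the generic `T` element type are illustrative, not part of the SDK:

```typescript
// Firestore caps a batched write at 500 operations, so larger
// workloads must be split into multiple batches. Each chunk below
// would become one batch.commit() call; atomicity then holds only
// within a chunk, not across chunks.
const BATCH_LIMIT = 500;

function chunkOps<T>(ops: T[], limit: number = BATCH_LIMIT): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < ops.length; i += limit) {
    chunks.push(ops.slice(i, i + limit));
  }
  return chunks;
}

// 1,200 pending writes split into batches of 500, 500, and 200.
const sizes = chunkOps(Array.from({ length: 1200 }, (_, i) => i)).map(
  (chunk) => chunk.length
);
```

In real code each chunk would be fed into a fresh `db.batch()` and committed; if cross-batch failures matter to you, that is exactly the case where BulkWriter (below) is the better fit.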

BulkWriter:

The BulkWriter class, available in Firestore's server-side SDKs (such as firebase-admin for Node.js), is a newer feature that lets you perform large-scale writes efficiently. It's designed for heavy write workloads and automatically manages retries and backoff.

  • Atomicity: Each operation in a BulkWriter is independent of the others. If one operation fails, it doesn't affect the rest. They can be retried individually.
  • Batch Size Management: BulkWriter automatically groups your write operations into batches of up to 500 and sends each batch as soon as it fills (or when you call flush() or close()). This keeps memory usage efficient even for very large write workloads.
  • Rate Limiting: BulkWriter automatically handles rate limiting and retries on your behalf. It uses exponential backoff for handling failures, which is a strategy that gradually increases the wait time between retries to minimize the impact of network congestion.
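The exponential backoff described above can be pictured with a simple delay schedule. This is a sketch of the general technique, not BulkWriter's internals — the base delay, cap, and function name are illustrative constants, not the SDK's actual values:

```typescript
// Illustrative exponential backoff: the delay roughly doubles after
// each failed attempt, up to a cap, so retries spread out instead of
// hammering a congested service. baseMs and maxMs are example values,
// not the constants BulkWriter actually uses.
function backoffDelayMs(
  attempt: number,
  baseMs: number = 100,
  maxMs: number = 60_000
): number {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

// Delays for the first five retry attempts: 100, 200, 400, 800, 1600 ms.
const delays = [0, 1, 2, 3, 4].map((attempt) => backoffDelayMs(attempt));
```

With BulkWriter you normally don't implement this yourself; the point is that failed individual writes are retried with growing delays while the rest of the queue keeps flowing.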

Choosing between Batched Writes and BulkWriter depends on your requirements. If you need atomicity across a set of operations (up to 500), Batched Writes is the way to go. If you're dealing with a very high volume of writes and want efficient memory management, automatic retries, and backoff, BulkWriter is the more appropriate choice.
