We want our distributed backend data store to run concurrently while giving results as if it were running sequentially
We’re used to writing code as if there is only one front-end and only one replica in the backend
- When a write occurs, all the replicas need to have the new data
- Single-store semantics makes things easy to think about
- It’s hard to make sure the replicated implementation actually preserves these semantics
- We cannot always guarantee linearizability in the backend
When replica A receives a write request, A sends requests to the other replicas to update their data
The frontend has to make a write request to every replica and wait for acknowledgements from all of them
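A minimal sketch of that write-all scheme (the `Replica` and `Frontend` names are illustrative, not a real API): the frontend only returns from a write once every replica has acknowledged it, so any single replica is then safe to read.

```python
class Replica:
    def __init__(self):
        self.store = {}

    def write(self, key, value):
        self.store[key] = value
        return "ack"

    def read(self, key):
        return self.store.get(key)

class Frontend:
    def __init__(self, replicas):
        self.replicas = replicas

    def write(self, key, value):
        # Only return once every replica has acknowledged the write.
        acks = [r.write(key, value) for r in self.replicas]
        assert all(a == "ack" for a in acks)

    def read(self, key):
        # Any one replica suffices, since writes are all-acked.
        return self.replicas[0].read(key)

replicas = [Replica() for _ in range(3)]
fe = Frontend(replicas)
fe.write("x", 1)
print(fe.read("x"))  # 1 -- and every replica already holds x=1
```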
- reads see the last write
- there is a total ordering of requests
- everything runs in program order (rather than real-time order)
- writes can be delayed arbitrarily, which is what makes this a non-linearizable data store
writes -> A -> B -> C <- reads
- The frontend can update just replica A, then A forwards the write to the next replica in the “chain”
- When the frontend reads, it asks the last replica in the “chain”
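The chain idea can be sketched in a few lines (node names and the `successor` field are illustrative): writes enter at the head and are forwarded down the chain, reads go to the tail, so a read can only observe writes that every node has already applied.

```python
class ChainNode:
    def __init__(self, successor=None):
        self.store = {}
        self.successor = successor

    def write(self, key, value):
        self.store[key] = value
        if self.successor:  # forward the write down the chain
            self.successor.write(key, value)

    def read(self, key):
        return self.store.get(key)

# Build the chain A -> B -> C
c = ChainNode()
b = ChainNode(successor=c)
a = ChainNode(successor=b)

a.write("x", 42)    # frontend writes to the head (A)
print(c.read("x"))  # frontend reads from the tail (C): 42
```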
- RAM is not linearizable, but it is sequentially consistent
- We use memory barriers to pause the program until all outstanding writes have finished before we read
Both Sequential Consistency and Linearizability offer single-store semantics
This code is not sequentially consistent or linearizable, but it is causally consistent: W(x)3 and W(x)2 are running concurrently, so it is okay that later reads get different values
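That history can be sketched directly (replica names and the `apply` helper are illustrative): since neither W(x)2 nor W(x)3 happens-before the other, causal consistency permits two replicas to apply them in different orders and serve different values to their readers.

```python
# Two writes with no causal order between them.
concurrent_writes = [("x", 2), ("x", 3)]

def apply(order):
    """Apply writes in the given order and return the final value of x."""
    store = {}
    for key, value in order:
        store[key] = value
    return store["x"]

replica_a = apply(concurrent_writes)            # W(x)2 then W(x)3
replica_b = apply(reversed(concurrent_writes))  # W(x)3 then W(x)2
print(replica_a, replica_b)  # 3 2 -- different, yet causally consistent
```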