Realtime Collaborative Editing

This document describes both high-level and more in-depth aspects of realtime collaborative editing. If you're busy, here's a TL;DR:

  • users expect it
  • we need it
  • we can do it
  • we can sustainably support it
  • there are a number of problems of varying severity to solve, all of them solvable

If the above doesn't answer your questions, read on.

Principles

These statements drive the entire research effort and are often the reason a particular design decision is made.

Realtime

We live in a world where certain expectations are taken for granted.

If you edit something online and can share it with someone else, people expect things to "automatically synchronize", whatever that means.

Google Docs, Figma and others set a high bar by offering a smooth UX for collaborative editing. They were preceded by multiplayer games, which have been doing this for decades, albeit in a slightly different field and setup.

Lean

We are not Google (yet) and cannot afford dedicating a couple of racks of servers in 36 regions just so users can see each other's cursors. Without "lean", the operational cost of this feature will make it economically unfeasible.

Simple

We are not Google (yet) and don't have a dedicated team of 20 engineers in charge of a single feature. Without "simple" we will drown in bug reports ultimately caused by our inability to fully grasp how the system works.

For obvious reasons "simple" is very subjective and thus might not be particularly helpful in practical terms. Also, there is no such thing as "a simple solution to a very complicated problem", so in a way it is impossible to achieve.

With that in mind, let's treat "simple" as "loosely coupled". Loose coupling means two things:

  • you are able to decompose a huge problem into a subset of smaller problems, each of which can be solved separately;
  • changing one part does not cause a ripple effect on everything else.

Expectations

Two users open the same graph (assuming both have edit privileges). They expect:

Must haves

  • when someone makes a change, that change to be immediately visible to the other person, and vice versa
  • the graph to be automatically "saved" in the cloud without needing to File -> Save or Cmd+S
  • the graph to appear exactly the same for both users

Should have

  • to see that another person is present there (typically via a badge in some corner)
  • to see each other's cursors and selection state

Won't have

  • to be able to disconnect from the Internet and keep editing

Problem Overview

Based on the current understanding of the problem, we assume that >90% of the time changes will not produce conflicts, and that connectivity will be stable enough to allow smooth and timely delivery of events. In other words, most of the time we will be dealing with happy cases, so we should focus on making the happy cases fast and keeping the impact of non-happy cases to a minimum. Let's refer to this strategy as optimistic.

The following high level topics roughly describe what's involved.

  • Each document should have a single source of truth which should be a centralized database.
  • Modifications need to be applied locally, recorded, sent to the backend and broadcast to other clients, where they can be subsequently applied.
  • Endpoints for accepting and applying modifications should be fast and reliable. This requires the underlying database design to ensure the atomicity of each update, so that nothing is accidentally lost or overwritten.
  • The order of concurrent modifications cannot be guaranteed. When two users edit the same field, the concern is not "who wins" but that the document must converge (i.e. both users should eventually see the same document).
  • Realtime editing experience is powered by the delivery of events over persistent connections.
  • Brief losses of connectivity are to be expected and should be handled seamlessly, if possible preventing quirks from ruining the UX.
  • Clients should proactively reconnect when disconnected due to network failures, backend deployments or timeouts.
  • Event loss may occur during losses of connectivity. Upon reconnect we need to re-sync the state with the backend.
  • Events should be treated as discardable and should never be considered a source of truth. Based on that, clients need to periodically validate their versions of documents against the source of truth.
  • The validations should be frequent to reduce the possibility of divergence. Therefore they should be cheap.
  • Users can tolerate temporary discrepancies, as long as they are eventually resolved. This allows relaxing some of the consistency requirements: e.g. two users adding an array item at the same time may temporarily see two different versions of the same array.
  • Each persistent connection should map to a single editing session.
  • Session updates should be persisted and broadcast. Sessions can also be destroyed when the corresponding connection is destroyed. They should also expire if the backend fails to remove them (e.g. due to a bug or a crash).
  • Events should be partitioned by graph: clients should only receive events about the graphs they're currently editing (see the sketch after this list).
  • Internally the events may also require partitioning, though this is harder to achieve and may require future research.
  • For better load distribution, client connections should have hard timeouts (e.g. force reconnect every 5 or 10 minutes). This addresses the situation where adding more pods does not relieve over-saturated ones, because clients would otherwise remain connected to the existing pods indefinitely.
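
To make the partitioning point concrete, here is a minimal sketch of broadcasting events over a per-graph Redis pub/sub channel (the channel naming, the GraphEvent shape and the use of ioredis are illustrative assumptions, not the actual implementation):

import Redis from 'ioredis';

// Hypothetical event envelope; the real payload shapes may differ.
interface GraphEvent {
    type: 'graphModified' | 'sessionCreated' | 'sessionClosed';
    graphId: string;
    payload: unknown;
}

const pub = new Redis();
const sub = new Redis(); // a subscribing connection cannot issue regular commands

// Publish an event only to the channel of the graph it belongs to.
async function broadcast(event: GraphEvent) {
    await pub.publish(`graph:${event.graphId}`, JSON.stringify(event));
}

// Each pod subscribes only to the graphs its connected clients are editing.
async function watchGraph(graphId: string, onEvent: (event: GraphEvent) => void) {
    await sub.subscribe(`graph:${graphId}`);
    sub.on('message', (channel, message) => {
        if (channel === `graph:${graphId}`) {
            onEvent(JSON.parse(message));
        }
    });
}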

In-Depth Problems and Explanations

This section is very detailed and technical. You can probably skip it, unless you're really curious.

Fat models

NodeScript takes full advantage of Object Oriented Programming, a very powerful and widespread paradigm for software construction. It is one of the best things that has ever happened to programming, largely due to its main methodological pillars: encapsulation and abstraction. Those two in conjunction allow localizing problems and solving them individually, promoting loose coupling, separation of concerns, etc.

There are numerous design patterns in OOP; one such pattern is "model", which promotes keeping functions close to the data structures they operate on. Graphs have a lot of functionality necessary for editing (e.g. topology, invariants, commands), so a model is a very natural fit for them.

The problem is: sometimes models get "fat". In NodeScript it is impossible to understand which properties each node supports without knowing about the module powering the node. So the Graph model, in order to be functionally complete, must know about all the modules it uses — which in our case means downloading them from the Internet.

This is what the initial version of the editor did, and it definitely got the job done.

With concurrent editing we now have a need to apply modifications on the backend. The "lean" principle here prompts a question: can we get away from having to know about the modules on the backend?

The answer is: yes, we can and we should. But it's also difficult, because the fat model needs to be broken into two things:

  • an "underlying" document, which should be a plain old JSON without any functionality attached to it
  • a "view" of such document, which answers all the topology and module-related questions

Things like the visual editor and the compiler definitely need the "view" (which is still somewhat heavy), but as far as editing goes, the backend only needs the underlying document.

Decision made: break the "fat" model into a "plain JSON document" (referred to as a "spec") and a view of that document.

A few practical concerns (already addressed):

  • now invariants are applied "on read" as opposed to "on write"
  • some data schema changes are necessary (sketched below)
    • nodes converted from NodeSpec[] into { [key: string]: NodeSpec }
    • props converted from PropSpec[] into { [key: string]: PropSpec } and are now accessed by key
    • props and entries separated
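
For illustration, here is a minimal sketch of what the resulting shapes might look like (the type names are assumptions made up for this sketch, not the actual NodeScript definitions):

// The "spec" is a plain old JSON document with no behaviour attached,
// so the backend can store and modify it without knowing about modules.
interface PropSpec {
    value: string;
}

interface NodeSpec {
    props: { [key: string]: PropSpec };
}

interface GraphSpec {
    nodes: { [key: string]: NodeSpec };
    metadata: { [key: string]: unknown };
}

// The "view" wraps a spec and answers topology- and module-related
// questions; only the visual editor and the compiler need it.
class GraphView {
    constructor(readonly spec: GraphSpec) {}

    getNode(nodeId: string): NodeSpec | undefined {
        return this.spec.nodes[nodeId];
    }
}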

Applying modifications

Let's start with how the graph JSON works.

Graphs are JSON documents with known-upfront structure. They currently look like this:

{
    "nodes" {
        "someNodeId": {
            "props": {
                "key": {
                    "value": "123"
                }
            }
        },
        "someOtherNodeId": { ... },
    },
    "metadata": {
        ...
    }
}

Each user loads the graph document when they start editing. This JSON document is used to render the initial canvas with all the nodes, links and property values, as well as for computing the result of each node.

When users make changes, the JSON fields are modified by frontend code directly. For example, some code path could lead to a modification like this:

graph.nodes['someNodeId'].props['key'].value = '345';

That change happens as part of frontend code. Without additional considerations it is applied to the JSON document in-place, so all we have is a single JSON document which is constantly up-to-date and perfectly in-sync with what the user sees.

Except it's local to the user and resides in the user's (highly volatile) memory.

The idea behind making the editing collaborative is to introduce a way of applying modifications that can be replicated. This implies encapsulating each possible change in a data structure, so that it can be sent over the network, stored in the database and broadcast to connected clients.

Disregarding esoteric approaches, there are only two strategies to consider:

  1. Domain-specific: each command maps to an exact modification supported by the editor. For example:

    {
        "name": "setPropValue",
        "nodeId": "someNodeId",
        "propKey": "key",
        "value": "345"
    }

    The upside of this approach is that it is pretty much universal: just about any change can be encapsulated by adding a new command type. Commands can also be more compact to store than the alternatives.

    The downside is that more and more logic needs to be implemented for each command type (and possibly a schema too). This means more work and more complexity; given that the commands are powered by custom code, it is also very easy to introduce accidental coupling to other things (like knowledge of the modules), which may further complicate matters.

  2. Arbitrary JSON modifications.

    One such example is the RFC 6901 JSON Pointer: any arbitrary modification could be described with the following data structure:

    {
        "path": "/nodes/someNodeId/props/key/value",
        "value": "345"
    }

    Despite being quite universal and standardized, JSON Pointers are very limited when it comes to arrays, as they only allow index-based access and appending an item to the end of the array. More on that later.

    Another issue with JSON Pointers is figuring out how to deal with missing path components (e.g. what happens if you apply the above modification to an empty document?). Since RFC 6901 does not specify this behaviour, existing implementations re-create missing paths differently and do not allow fine-grained control over what to do in each specific case (see the sketch below). More on that later.
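
To illustrate the second issue, consider a naive applier (a hypothetical sketch, not any real library) that silently re-creates missing path components:

// Naive RFC 6901 applier (illustrative; ~0/~1 escaping omitted).
function applyPointer(doc: any, path: string, value: unknown) {
    const components = path.split('/').slice(1);
    let target = doc;
    for (const key of components.slice(0, -1)) {
        if (target[key] == null) {
            // The spec does not say what to do here. Re-create the path?
            // Reject the modification? The right answer depends on what
            // the missing component represents (a node vs. a property),
            // which a generic JSON Pointer library cannot know.
            target[key] = {};
        }
        target = target[key];
    }
    target[components[components.length - 1]] = value;
}

// Applied to an empty document, this silently creates a "ghost" node:
const doc = {};
applyPointer(doc, '/nodes/someNodeId/props/key/value', '345');
// doc is now { nodes: { someNodeId: { props: { key: { value: '345' } } } } }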

At a glance it seems that either of the two approaches would work, but the second one — provided that the mentioned problems are addressed — seems much simpler.

Reducing the problem of modifications to a general-purpose, well-defined library removes that complexity from the rest of the system, which counts towards the "simple" principle.

Decision made: go with a general-purpose modification library.

Modification specifics

Comparing with JSON Pointer proved useful because it allowed us to identify the missing parts. Those are:

  • when a path component is missing, we can't have a one-size-fits-all approach; specifically:

    • when a node is missing (e.g. we're modifying a node's property and someone deletes the node), we don't want to create a "ghost" node with just that change applied — we want to reject that change instead
    • on the other hand, when a path component maps to an object structure, we want to transparently create it if it does not exist (e.g. if the key property is missing and we're editing it, we want to create it)
  • when an array element is modified, we need to find that element by something more stable than its index; typically something like .find(_ => _.id === "someId") is used in code to target an individual element for an update, so the library should support the same

  • things like "insertBefore" and "removeWhere" are required to make CRDT-like modifications to array entries

All those considerations materialized in a very succinct library called CCM (short for "Convergent Concurrent Modifications"). It features a tiny query language that allows specifying the "initializer" for missing components (which could be either an empty object or an empty array), as well as a syntax for finding a component by field value (strict equality only).

The example modification from above would look like this in CCM:

{
    "query": "nodes{} someNodeId props{} key{} value",
    "value": "345"
}

Note: {} at the end of a component means "create an empty object if it's not there"; someNodeId doesn't have an initializer, so the modification will be discarded if there is no such node.
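
For intuition, here is a heavily simplified, hypothetical applier for such modifications (the actual CCM semantics may differ in details; the find-by-field-value syntax is omitted):

interface Mod {
    query: string;
    value: unknown;
}

// Returns false if the modification must be discarded.
function applyMod(doc: any, mod: Mod): boolean {
    const components = mod.query.split(/\s+/);
    let target = doc;
    for (let i = 0; i < components.length - 1; i++) {
        const c = components[i];
        const init = c.endsWith('{}') ? {} : c.endsWith('[]') ? [] : null;
        const key = init != null ? c.slice(0, -2) : c;
        if (target[key] == null) {
            if (init == null) {
                // No initializer: the component must already exist
                // (e.g. the node was deleted), so discard the change.
                return false;
            }
            target[key] = init;
        }
        target = target[key];
    }
    // The last component is the field being assigned.
    target[components[components.length - 1]] = mod.value;
    return true;
}

// Applied to an empty document, the modification above is discarded:
applyMod({}, { query: 'nodes{} someNodeId props{} key{} value', value: '345' });
// → false ("nodes" gets created, but "someNodeId" has no initializer)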

Graph editing overview

These diagrams cover the most important happy cases that occur during regular editing.

sequenceDiagram
    participant User
    participant Pod
    participant Redis
    participant Database
    participant Other Pods
    participant Other Users

    Note over User: Opens a graph

    User ->> Pod: openGraph

    Pod ->> Database: get graph snapshot
    Pod ->> Pod: check access
    Pod ->> Pod: apply pendingMods

    Pod ->> Database: create session
    Pod ->> Redis: ***sessionCreated
    Redis ->> Other Pods: ***sessionCreated
    Other Pods ->> Other Users: ***sessionCreated

    Pod ->> Database: fetch all sessions of graph
    Pod ->> User: { graph, sessionId, sessions }

sequenceDiagram
    participant User
    participant Pod
    participant Redis
    participant Database
    participant Other Pods
    participant Other Users

    Note over User: Makes a change

    User ->> User: apply modification locally
    User ->> Pod: modifyGraph

    Pod ->> Database: append modification to pendingMods
    Pod ->> Redis: ***graphModified
    Redis ->> Other Pods: ***graphModified
    Other Pods ->> Other Users: ***graphModified

    Other Users ->> Other Users: apply modification locally

sequenceDiagram
    participant User
    participant Pod
    participant Redis
    participant Database
    participant Other Pods
    participant Other Users

    Note over User: Navigates away or closes window

    User ->> Pod: <connection closed>

    Pod ->> Database: delete session
    Pod ->> Redis: ***sessionClosed
    Redis ->> Other Pods: ***sessionClosed
    Other Pods ->> Other Users: ***sessionClosed

Graph Condensing

To make modifyGraph fast and atomic, we append each modification to pendingMods. These will quickly accumulate to an unreasonable amount, so we need a separate process to condense the modifications into a snapshot.

sequenceDiagram
    participant CondenseGraphsTask
    participant Database

    CondenseGraphsTask ->> Database: lock N documents that have pending mods
    CondenseGraphsTask ->> Database: read locked documents

    opt for each document
        CondenseGraphsTask ->> CondenseGraphsTask: apply pending mods
        CondenseGraphsTask ->> Database: atomically update, preserve mods happened after read
        CondenseGraphsTask ->> Database: unlock
    end

Special care must be taken not to overwrite modifications that happened after the document was locked and read by the condenser task.

The locks should also have a timeout mechanism incorporated into the locking query.

Despite all the locking, condensing tasks are expected to be pretty fast, provided they are scheduled frequently enough to prevent a huge backlog of modifications from accumulating. We'll start by running the task every minute and then see whether that's too frequent, not frequent enough, or just about right.
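
Here is a sketch of what the append and the condense update could look like, assuming a MongoDB-like document store (an assumption; the actual database and document shapes are not specified here, and the function names are made up). Giving each pending mod a unique id lets the condenser remove exactly the mods it applied, preserving anything appended after the read:

import { Collection, ObjectId } from 'mongodb';

interface PendingMod {
    _id: ObjectId; // unique id identifies exactly which mods were applied
    query: string;
    value: unknown;
}

interface GraphDoc {
    _id: ObjectId;
    spec: any;                 // the condensed snapshot
    pendingMods: PendingMod[];
    lockedUntil?: Date;        // lock with a built-in timeout
}

// modifyGraph: a single atomic append, fast and lock-free.
async function appendMod(graphs: Collection<GraphDoc>, graphId: ObjectId, query: string, value: unknown) {
    await graphs.updateOne(
        { _id: graphId },
        { $push: { pendingMods: { _id: new ObjectId(), query, value } } },
    );
}

// CondenseGraphsTask: lock, apply pending mods to the snapshot,
// then remove only the mods that were applied, and unlock.
async function condense(graphs: Collection<GraphDoc>, graphId: ObjectId) {
    // Acquire the lock atomically; expired locks are treated as free.
    const doc = await graphs.findOneAndUpdate(
        { _id: graphId, $or: [{ lockedUntil: { $exists: false } }, { lockedUntil: { $lt: new Date() } }] },
        { $set: { lockedUntil: new Date(Date.now() + 60_000) } },
    ); // mongodb driver v6 returns the pre-update document (or null)
    if (!doc) return; // someone else holds a live lock
    const applied = doc.pendingMods;
    const spec = doc.spec;
    for (const mod of applied) {
        applyMod(spec, mod); // applyMod from the CCM sketch above
    }
    await graphs.updateOne(
        { _id: graphId },
        {
            $set: { spec },
            $pull: { pendingMods: { _id: { $in: applied.map(m => m._id) } } },
            $unset: { lockedUntil: '' },
        },
    );
}

The key property of this shape is that appendMod never blocks on the condenser, and the condenser's final update removes only the mods it actually read, so concurrently appended mods survive.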

Graph Validation

Once graphs are condensed on the backend, we can broadcast an event for everyone to validate their documents. Kind of like a check-in to see if something has diverged significantly.

This allows promptly responding to any potential divergence that can happen in edge cases, for example when two users edit the same array and keep pressing Add/Remove buttons, which may cause them to see two different pictures of the same graph.

sequenceDiagram
    participant User
    participant Pod
    participant CondenseGraphsTask
    participant Redis

    User ->> User: compute hash after each local change
    User ->> User: store hash in list

    Note over CondenseGraphsTask: finished writing snapshot

    CondenseGraphsTask ->> CondenseGraphsTask: compute hash
    CondenseGraphsTask ->> Redis: ***shouldValidate { graphId, hash }
    Redis ->> Pod: ***shouldValidate { graphId, hash }
    Pod ->> User: ***shouldValidate { graphId, hash }
    User ->> User: compare hash with local hash list

    opt hash does not exist in list
        User ->> Pod: reload graph
        User ->> Pod: clear hash list
    end

    opt hash exists
        User ->> User: drop hashes before the received one
    end

Note: clients should store a list of hashes that have occurred since the last shouldValidate event. This is because the snapshot can lag behind your local modifications. The logic is as follows (a code sketch follows the list):

  • If the hash received from the server exists in your local list, then your local version is likely correct. You should drop this hash and the hashes prior to it (assuming you store them in an append-only list).
  • If the hash received from the server does not exist in your local list, then at some point your version diverged from the source of truth. In that case you should reload the graph.
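
Here is a minimal sketch of that client-side bookkeeping (function names are made up for illustration; in practice the hash must be computed over a canonical serialization so that client and server agree):

// Append-only list of hashes since the last shouldValidate event.
const hashes: string[] = [];

async function sha256(text: string): Promise<string> {
    const bytes = new TextEncoder().encode(text);
    const digest = await crypto.subtle.digest('SHA-256', bytes);
    return [...new Uint8Array(digest)].map(b => b.toString(16).padStart(2, '0')).join('');
}

// Called after each local change. JSON.stringify is only a stand-in:
// a canonical (key-ordered) serialization is required in practice.
async function onLocalChange(graph: unknown) {
    hashes.push(await sha256(JSON.stringify(graph)));
}

async function onShouldValidate(serverHash: string, reloadGraph: () => Promise<void>) {
    const idx = hashes.indexOf(serverHash);
    if (idx === -1) {
        // Our version diverged from the source of truth at some point.
        await reloadGraph();
        hashes.length = 0;
    } else {
        // The snapshot matches one of our past states:
        // drop that hash and everything before it.
        hashes.splice(0, idx + 1);
    }
}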

FAQ

  • C'mon, it's 2022. Isn't this a solved problem already?

    Yes, it is solved quite successfully by a number of projects. This does not mean we can do (or afford) what they are doing.

    All the well-known examples are implemented in-house, tailored to each product's specifics and built with a full understanding of the numerous trade-offs and how to deal with them.

  • Isn't there an off-the-shelf solution we can pick up instead of re-inventing the wheel?

    Short answer: no.

    Long answer follows.

    There is a substantial body of research into conflict-free replicated data types (CRDTs), the most notable of which is led by Martin Kleppmann, a well-known researcher and public figure. This research provides the theoretical basis for general-purpose data structures and algorithms that allow multiple copies of a document to "converge" after being edited by multiple parties and subsequently merged. In theory this allows not only for realtime collaboration, but also for decentralized architectures (with no single source of truth), as well as offline editing and much more.

    All of this sounds pretty great and convincing. In practice, however, a number of aspects make adopting off-the-shelf solutions difficult, if not impossible. Based on an analysis of the open source JavaScript libraries that are "closest to production readiness" in 2022 and that support arbitrary JSON manipulations rather than just text (Automerge, YJS):

    • they feature a "fat" document model with their own implementations of operations, transports and syncing, with zero guarantees that it fits your particular stack — kind of a combination of vendor lock-in and the "all in" problem

    • they require you to store the entire history of changes in order to guarantee convergence

    • they are neither slim nor easy to learn and adopt, which makes the time-saving argument very questionable

    • they still force you to understand and work around the various trade-offs (e.g. "tombstones", "schema compatibility", etc)
