There is an ongoing shift in programming towards a more constrained mindset, one that recognizes shared mutable state and side effects as significant sources of accidental complexity, and concepts like immutability and reactive, unidirectional pipelines as ways to overcome them. Simplicity is less optional than before, based on an understanding of the limited capacity of human working memory and on programming language theory and practice. With functional programming ideas seeping into the mainstream, fewer people can claim they don't need them simply because they have never used them.
The basic idea of controlling complexity through constraints is not novel at all, though, and it raises a valid question about why this shift has taken so long to develop, despite the overall trends towards automation and productivity. For example, the distinction between accidental and essential complexity in software goes back to Fred Brooks' "No Silver Bullet" essay from the '80s, but it took until 2006 for the Out of the Tar Pit paper to emphasize state as a source of accidental complexity and to propose solutions for controlling it, solutions similar enough to the ones now being adopted to vindicate the paper.
The answer to the question of why it's taken this long is that simplicity is hard, or in Tony Hoare's words:
[…] there are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult.
The difficulty of the problem shows in the solutions, all of which carry downsides of their own, like functional languages lacking a familiar syntax and requiring knowledge of category theory jargon.
Redux architecture exemplifies the functional mindset in JavaScript, replacing approaches like two-way data binding with a unidirectional data flow and immutable state updates. The advantage of Redux is that each state is like a complete snapshot in time (depending on how much local state is used), and that it gives developers guarantees about the direction in which updates are propagated, making state more predictable and testable.
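As a minimal sketch of that flow (not the Redux API itself, and with made-up action names), actions pass through a pure reducer that returns a fresh snapshot, so the whole state history can be kept around and inspected:

// Sketch only: a pure reducer and a fold over actions, illustrating that
// state flows one way and each intermediate state is a complete snapshot.
const reducer = (state = { count: 0 }, action) =>
  action.type === 'increment' ? { ...state, count: state.count + 1 } : state;

const actions = [{ type: 'increment' }, { type: 'increment' }];
const history = actions.reduce(
  (states, action) => [...states, reducer(states[states.length - 1], action)],
  [reducer(undefined, { type: '@@INIT' })]
);
console.log(history.map(s => s.count)); // [0, 1, 2] — no snapshot was mutated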
Adding const assignments, pure array iteration methods like Array.map() and the ... spread syntax has improved the story for immutability in vanilla JavaScript, but the overall support is still not first-class, and there are still significant rough edges, like immutable state updates relying on making shallow copies of objects. Examples from the Redux documentation highlight the eye-watering verbosity of deep immutable updates using shallow copying:
const updateVeryNestedField = (state, action) => ({
  ...state,
  first: {
    ...state.first,
    second: {
      ...state.first.second,
      [action.someId]: {
        ...state.first.second[action.someId],
        fourth: action.someValue,
      },
    },
  },
});
Inserting an item into an array is likewise verbose:
const insertItem = (array, action) => [
  ...array.slice(0, action.index),
  action.item,
  ...array.slice(action.index),
];
Code volume was the second major source of complexity identified in the Out of the Tar Pit paper, and verbosity contributes to code volume, putting it at odds with the overall goal of tools like Redux.
Using the spread syntax for immutable updates also leaves enforcement to personal discipline or code reviews, which is less robust and less process-oriented than enforcing it at the level of the programming environment.
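The gap is easy to demonstrate: a spread only copies one level deep, so nothing in the language stops an accidental mutation of shared nested state from slipping through (illustrative object shape below):

// The shallow copy shares the nested objects with the original, so a stray
// assignment mutates the "old" state as well — and nothing flags it.
const state = { first: { second: { value: 1 } } };
const next = { ...state, flag: true };

next.first.second.value = 2;
console.log(state.first.second.value); // 2 — the discipline failed silently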
There are several userspace solutions that address the verbosity of deep immutable updates in vanilla JavaScript, the inefficiency of copying, and the lack of enforcement; until recently the most notable example was Immutable.js. These solutions come with significant pitfalls of their own, though, like being heavy both in the size of the API surface (especially with Immutable.js and its Java-inspired API) and in the need for constant conversions between the immutable containers and the plain arrays and objects used by most other JavaScript code. Functional abstractions like lenses or optics that deal with deep immutable updates have not seen wide adoption in JavaScript.
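To make the conversion churn concrete, here is a sketch of the earlier deep-update example written against Immutable.js: when the rest of the code expects plain objects, every update round-trips through fromJS() and toJS():

import { fromJS } from 'immutable';

// Sketch: the store holds plain objects, so the handler converts in,
// performs the deep update on the immutable container, and converts back out.
const updateVeryNestedField = (state, action) =>
  fromJS(state)
    .setIn(['first', 'second', action.someId, 'fourth'], action.someValue)
    .toJS();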
Enter Immer
Immer addresses the problems of interop, deep updates, API surface size, enforcement and excessive type conversions with a stroke of elegant simplicity: its produce function lets you use JavaScript's familiar mutative APIs on a draft object and returns an updated plain object or array that is nevertheless immutable and space-efficient thanks to structural sharing. The earlier examples become something as readable as:
const updateVeryNestedField = (state, action) => produce(state, draft => {
  draft.first.second[action.someId].fourth = action.someValue;
});
const insertItem = (array, action) => produce(array, draft => {
  draft.splice(action.index, 0, action.item);
});
Immer implements structural sharing and tracks updates to the draft object using ES2015 Proxy, with a slightly less performant fallback for legacy ES5 environments.
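The following is not Immer's actual implementation, just a bare-bones sketch of the underlying idea: a Proxy intercepts writes to the draft and records them, so the result can be assembled copy-on-write while untouched parts keep their original references:

// Sketch only: handles top-level properties; Immer also proxies nested objects.
const createDraft = (base) => {
  const changes = {};
  const draft = new Proxy(base, {
    get: (target, key) => (key in changes ? changes[key] : target[key]),
    set: (target, key, value) => {
      changes[key] = value; // record the write; the base object is never touched
      return true;
    },
  });
  const finalize = () =>
    Object.keys(changes).length ? { ...base, ...changes } : base; // share untouched structure
  return { draft, finalize };
};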
A useful feature compared to immutable updates by copying is that Immer preserves referential equality when an update leaves the properties unchanged, which simplifies using objects for memoization (like passing objects as props to React.memo() components):
const initial = { foo: 1 };
console.log(produce(initial, draft => { draft.foo = 1; }) === initial); // true
console.log(produce(initial, draft => { draft.foo = 2; }) === initial); // false
console.log({ ...initial, foo: 1 } === initial); // false
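That last guarantee is exactly what reference-based memoization relies on; as a small sketch (assuming a React function component, written without JSX to stay self-contained), React.memo skips re-rendering when the props it receives are referentially equal to the previous ones:

import React from 'react';

// Sketch: because produce() returns the original object for no-op updates,
// passing it as a prop lets React.memo bail out of the re-render.
const UserBadge = React.memo((props) =>
  React.createElement('span', null, props.user.foo)
);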
As a userspace solution, though, Immer has a notable limitation: it cannot implement structural equality or value semantics without leaking memory, unlike more natively functional environments:
const id = draft => draft; // identity producer: makes no changes
console.log(produce({ foo: 1 }, id) === produce({ foo: 1 }, id)); // false
Open systems like servers and user interfaces need to model change to be useful, so the question isn't about eliminating state or change but about controlling the complexity involved by making changes predictable. Local mutation is much simpler than non-local mutation (global in the worst case), so it is not the problem that immutability addresses.
To put it differently, a change isn't a side effect unless it is observable outside the boundaries of the current function, so Immer producer functions are still pure (non-side-effecting) despite using imperative assignments, because the changes stay local to the draft.
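A small illustration of the same principle without Immer: this function mutates an array freely, yet it is pure, because everything it touches is created inside it and no change is observable from the outside:

// Local mutation stays pure: the array is created, filled and returned
// without affecting anything beyond the function's own scope.
const range = (n) => {
  const result = [];
  for (let i = 0; i < n; i += 1) {
    result.push(i); // mutation, but confined to the local array
  }
  return result;
};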
Immer has received significant recognition, for example by being included in the Redux Starter Kit, but it's still too common to see shallow copying as the go-to pattern for immutable updates in JavaScript, or, worse yet, shared mutable state. Immer's elegant simplicity positions it very well for becoming the go-to solution for avoiding shared mutable state, and popularizing immutability aligns with the broader need to shift attention towards better reasoning about code and avoiding errors in the first place instead of relying on mitigation like detecting errors in tests.
- Goggles – A Scala library for deep immutable updates with a similar rationale to Immer
- Simple Made Easy – Rich Hickey's classic talk about controlling complexity
- Overview of immutability in vanilla JavaScript at 2ality