@hotsphink, created July 26, 2025 23:51

This is more of a comment on the MEMORY-MANAGEMENT.md document than it is an issue with the proposal.

I find it difficult to understand the reachability implications of the proposal. The main section of the document (before the "weak map" section) addresses this directly, though I still find it hard to follow. That is partly because it attempts to explain things in terms of weak references and strong references, even though, as far as I can tell, nothing in this proposal uses weak references at all. If you are in a context and have access to an AsyncContext.Variable instance, then you can access the value. A weak reference would imply that you might not be able to get the value if nothing else keeps it alive. (At least, that's the simplest and most common form of weakness.) The confusion is made worse by the "weak map" section, because it starts making recommendations about converting strong references to weak references at some point in time.
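
To make the point concrete, here is the kind of access I mean (a minimal sketch, assuming only the proposal's public `AsyncContext.Variable` surface with `run()` and `get()`):

```js
// Sketch only: within a context, holding the Variable instance is always
// enough to read its value. Nothing here behaves like a weak reference.
const asyncVar = new AsyncContext.Variable();

asyncVar.run("request-123", () => {
  // We are in the context and we hold `asyncVar`, so get() returns the value;
  // nothing else needs to be keeping "request-123" alive for this to work.
  console.log(asyncVar.get()); // "request-123"
});
```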

I think part of the problem is the unfortunate term "weak map". Weak maps implement ephemeron tables, and do not involve weak references at all. If you have a weak map M and a key K, then you can reach the associated value V if it exists. If you lack either M or K, you cannot reach V through the <M, K> -> V edge. It is true that having the key by itself is not enough to keep the value alive, which superficially sounds like a weak reference not keeping its referent alive by itself, but weak references are traversed if anything else in the system can reach the referent, whereas weak map entries are traversed if and only if both the map and key are reachable. In a sense, it's a strong reference from the conjunction of M and K to V.
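
To spell that out with an ordinary JS WeakMap:

```js
// Ephemeron behavior: the value is reachable exactly when both the map and
// the key are reachable. No read ever "fails" the way a cleared WeakRef does.
const M = new WeakMap();
let K = {};
M.set(K, { payload: "V" });

M.get(K); // { payload: "V" } -- holding M and K, we can always reach V

// Drop K: the <M, K> -> V edge is severed, and V becomes collectable even
// though M itself is still alive. Dropping M (while keeping K) has the same
// effect. Neither M alone nor K alone keeps V alive.
K = null;
```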

The discussion of the expected lifetimes of async context maps is useful for optimization, but as an implementer I would like the semantics to be made more clear first. Async context maps are semantically ephemeron tables keyed by AsyncContext.Variable instances. Whether or not they share code with JS WeakMaps is an implementation detail.

One thing that might help is pseudocode descriptions of what happens for operations like setTimeout, await, setInterval, observer creation, and EventTarget.captureFallbackContext. The document currently talks about an agent's [[AsyncContextMapping]] field, but only gives prose descriptions of how that mapping is updated and restored. asyncVar.run(value, callback) is the exception -- it is nicely described in the text. Point 1 is a little weird in that of course the reference to value is strongly held; that is the default, and if it were anything else it would require a detailed description of why you might not be able to access value even though the current context has a mapping from a key that you have in hand. (IMHO, the second sentence of point 1 should be removed. Strong references are the default. Anything weak requires an explanation of when the edge is or is not followed.)
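
To illustrate the level of description I mean, here is run() as I understand it from the text. This is rough pseudocode, not spec steps: `agent.asyncContextMapping` stands in for the spec's [[AsyncContextMapping]], and `mapWith` is an invented helper that returns a copy of the mapping with one entry replaced.

```js
// Rough pseudocode. The mapping is saved, replaced for the duration of the
// callback, and unconditionally restored afterward.
function run(asyncVar, value, callback, ...args) {
  const previous = agent.asyncContextMapping;
  agent.asyncContextMapping = mapWith(previous, asyncVar, value);
  try {
    return callback(...args);
  } finally {
    // After this point the new mapping is only reachable from whatever
    // captured it (snapshots, scheduled callbacks); otherwise it can be GC'd.
    agent.asyncContextMapping = previous;
  }
}
```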

> For many of these async operations (such as setTimeout and .then), a callback is run once or multiple times in a task or microtask. In those cases, the operation can be seen as keeping a strong reference to the callback, and it will also keep a strong reference to the context that was current at the time that the API was called to start that operation. When the operation is finished, that reference will be removed.

You don't need to tell me the reference is strong. You do need to tell me what the reference is from. Perhaps it would be easier to read the spec for this sort of thing, but I'd rather see something like "setTimeout stores the callback along with the current value of [[AsyncContextMapping]]. When the timer expires, [[AsyncContextMapping]] is set to that value and callback is invoked. When it returns or yields, the previous value is restored" or whatever. If you're storing a field, then I know it's reachable as long as it's possible for it to be observed later. The interesting bit is whether the stored value can be discarded after the callback completes. That would make it easier to audit all the places that hang onto a mapping for a long time, since all of those have the potential to "leak".
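
Rendering that suggestion as pseudocode (same caveats as above; `hostScheduleTimer` is an invented stand-in for the host's timer machinery):

```js
// Not spec text. The pending timer holds the callback and the mapping that
// was current when setTimeout was called; both become collectable once the
// timer has fired (or been cleared).
function setTimeout(callback, delay) {
  const storedMapping = agent.asyncContextMapping;
  hostScheduleTimer(delay, () => {
    const previous = agent.asyncContextMapping;
    agent.asyncContextMapping = storedMapping;
    try {
      callback();
    } finally {
      agent.asyncContextMapping = previous;
    }
  });
}
```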

> With it, the context in which the passed callback is called also stores the current values of the given AsyncContext.Variables at the time that captureFallbackContext is called, and any calls to addEventListener in that context will store those values alongside the event listener.

Would this be easier to follow if it talked about snapshots? It seems that understanding the proposal already requires familiarity with snapshots.
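
For example, something along these lines would be easier for me to audit. This is a hypothetical restatement of the quoted paragraph, not the actual spec steps; the `fallbackValues` name and the listener bookkeeping are my guesses at the intent.

```js
// Hypothetical sketch. Only the values of the *given* variables are recorded,
// so this is essentially a partial snapshot taken at the call.
function captureFallbackContext(variables, callback) {
  const fallbackValues = new Map(variables.map((v) => [v, v.get()]));
  // Within callback(), addEventListener stores `fallbackValues` alongside each
  // listener it registers; when such a listener is later dispatched, those
  // values are restored into [[AsyncContextMapping]] around the listener call,
  // and the previous mapping is put back afterward.
  return callback();
}
```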

What I care about are situations where either an AsyncContext.Variable instance or a context mapping becomes unreachable. There is text trying to address this, e.g.:

> However, we do expect a weak map implementation to be useful in cases where a cross-realm interaction results in capturing AsyncContext.Variable keys from a different realm. This capturing happens implicitly through cross-realm function calls, and we wouldn't want to accidentally keep a whole realm alive.

"a weak map [would be] useful" is less helpful than "here are some situations where an AC.Var instance could become unreachable". While I am well familiar with cross-Realm edges keeping entire Realms alive, I'd like to first figure out how hard it would be to implement this stuff in a way that never keeps anything alive unnecessarily across the next GC, and only if that turns out to be intractable would I worry about how "bad" of a leak something might be.

Also, context maps becoming unreachable is separate from AC.Var instances becoming unreachable.

> Async context maps should initially be considered to hold their entries strongly, and then transition to be weak after a while.

Sorry, pet peeve again, but this "strong -> weak" is not the same as a strong reference becoming a weak reference. I believe this is saying that, as an optimization, a context map can initially reach all of its entries' values directly, and later transition to requiring the key to be reachable as well. A variant of this optimization is already present in the SpiderMonkey weak map implementation, and the information here about expected lifetimes is what is most relevant to it. That said, there may also be opportunities to know that a given context map is definitely unreachable, which might make the optimization partly or wholly unnecessary (since we can proactively clear or deactivate the mappings at that time).
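
In marking terms, here is how I read the intended rule (a sketch with invented names, not how any engine literally writes it):

```js
// Sketch of the tracing rule I think is being proposed. While the map is in
// its "initially strong" phase, values are kept alive by the map alone; after
// the transition, entries are traced as ephemerons, so a value is marked only
// once both the map and its key are known to be live.
function traceContextMap(map, marker) {
  for (const [key, value] of map.entries) {
    if (map.inStrongPhase) {
      marker.mark(value);
    } else {
      marker.markEphemeron(map, key, value);
    }
  }
}
```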

The useful information here: expected lifetimes, points where things are known to be unreachable, and (closely tied to the previous point) the fact that context maps are never directly exposed.
