@amb26
Created April 29, 2021 12:54
Thu, Apr 15 2021
the-t-in-rtf
So, I was reviewing some older fluid-handlebars work and wanted to ask about your current thinking about conditional rendering in the new world.
The simplest example is having different class names on a particular element based on aspects of the model.
A more complex example would be a search, where you might have a bunch of results you want to render, or you might just want to display "no results found" or some other diagnostic.
If each result is a sub-component, that raises one set of questions.
If on the other hand you want to have one thing generate the whole list (or the "no results found" message), it raises another question, namely are there iteration operators.
These two things are most of the useful smart bits in a template language like handlebars.
there's also the ability to create "helpers", so I was wondering about expanders in the future and whether we would be able to refer those in whatever syntax we use to express what should be rendered.
But I kind of thought through some of your comments on everything being part of the model, and can see that the "smart" bits that need to be displayed could be managed by using transforms to stash bits that aren't just text to display onscreen.
As if all that wasn't enough, I was just thinking about next-gen model listeners.
like if everything is in the model and you want to rerender when particular model variables change, you might not want to do that for every update.
Can we add something to limit the frequency of updates, for example, when a model variable is a timer, or when you're displaying a waveform based on sound input.
You might get updates every millisecond but only care to rerender every second.
I guess for now you'd have to handle it by having something like bergson choose to periodically update the model variable that would result in rerendering.
As if that last bit wasn't enough, that leads me to my other question about "liveness". React knows which material is used in the rendering process, and knows which state/props changes should result in a rerender most of the time.
I'm wondering how that works WRT the new renderer.
Anyway, I will stop, just started pondering what the new world might look like on my bike ride this morning.
if you've got work in progress on any of this that would be enlightening to read, I'd be happy to see it.
Wed, Apr 21 2021
bosmon
Sorry not to get round to these qs earlier
The answers to your two questions are lenses in both cases - in the first case you lens a boolean flag, based on results !== 0, into the existence or nonexistence of the component showing "no results found"
And in the second case, rather than "iteration constructs", you have constructs which lens arrays into corresponding arrays of components
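[Editor's note: a minimal sketch of the two lens ideas above, expressed as plain functions rather than any real Infusion API - the `toTarget` shape, the `resultItem` grade name, and the descriptor format are all invented for illustration.]

```javascript
// Lens a result array into the existence of a "no results found" component.
const noResultsLens = {
    toTarget: (results) => results.length === 0   // model -> boolean flag
};

// Lens an array of model items into a corresponding array of component
// descriptors - the stand-in for "iteration constructs".
const resultListLens = {
    toTarget: (results) => results.map((item) => ({
        type: "resultItem",       // hypothetical component grade
        model: { item }
    }))
};

const results = ["alpha", "beta"];
console.log(noResultsLens.toTarget(results));         // false
console.log(resultListLens.toTarget(results).length); // 2
```

The point of the lens framing is that both cases are the same operation: a pure projection from model state to component structure, with no imperative "if" or "for" in the template itself.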
the-t-in-rtf
like the "sources" route for current dynamic components, but able to manage a changing list of items?
bosmon
Yes, exactly
The most advanced example we have so far is my rewrite of the old "Table of Contents" component
https://github.com/amb26/new-renderer-demo/blob/FLUID-5047/tableOfContents/src/js/TableOfContents-new.js#L70
Which shows that this idea also works recursively
Well that bit is the override
Here's the base definition: https://github.com/amb26/new-renderer-demo/blob/FLUID-5047/tableOfContents/src/js/TableOfContents-new.js#L28-L36
Key to this are advanced ChangeApplier primitives which allow you to track edits to arrays in an intelligent way - and therefore do the corresponding edits to the DOM with far less work than React achieves in the case of array insertions and deletes etc.
These have been "upcoming" for several years ...
Today I think I'll finally have time to turn to the renderer cleanup that has been pending all year, in which I will finally eliminate its "virtual DOM" idiom that I eventually realised was entirely unnecessary
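[Editor's note: a hedged sketch of the kind of keyed array diff described above - not the real ChangeApplier primitives. It shows how tracking insertions and deletions by key yields a small list of DOM edit operations, rather than a wholesale re-render or virtual-tree diff.]

```javascript
// Given old and new arrays keyed by id, compute the minimal set of
// insert/remove operations - the information that lets a renderer patch
// the DOM directly for array edits.
function diffKeyedArray(oldItems, newItems, key = (x) => x.id) {
    const oldKeys = new Set(oldItems.map(key));
    const newKeys = new Set(newItems.map(key));
    const ops = [];
    oldItems.forEach((item, index) => {
        if (!newKeys.has(key(item))) {
            ops.push({ op: "remove", index, id: key(item) });
        }
    });
    newItems.forEach((item, index) => {
        if (!oldKeys.has(key(item))) {
            ops.push({ op: "insert", index, id: key(item) });
        }
    });
    return ops;
}

const before = [{ id: 1 }, { id: 2 }, { id: 3 }];
const after = [{ id: 1 }, { id: 3 }, { id: 4 }];
console.log(diffKeyedArray(before, after));
// [ { op: "remove", index: 1, id: 2 }, { op: "insert", index: 2, id: 4 } ]
```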
the-t-in-rtf
nice!
bosmon
And so re "liveness" - the renderer can do all the usual dirty tracking that React can do, but some more as well
In that it doesn't merely know when things are "dirty" for some unspecified reason, but which parts of the old DOM more exactly correspond to the new DOM
Some of my insights earlier this year could be summarised as "How I learned to stop worrying and love the DOM"
Stuff about "limiting frequency of updates" will be achieved by various kinds of "valving" modelRelay rules which might have to wait for the massive "TangledMat" rewrite since right now our relay idiom is limited to being essentially stateless, although I imagine we could bend the idiom a bit if we got desperate
I almost made some stateful relay rules earlier this year as I was working on fluid.transforms.toggle
But realised that it was much better to just beef up the relay API so that it can consume both the old source and target models, as well as the new source model
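[Editor's note: a speculative sketch of one "valving" relay rule - a stateful throttle that forwards at most one model update per interval, the "updates every millisecond, rerender every second" case from earlier. The clock is injected so the behaviour is testable without timers; nothing here is an existing modelRelay API.]

```javascript
// A stateful throttle: forward at most one value per intervalMs.
function makeThrottledRelay(target, intervalMs, now = Date.now) {
    let lastFired = -Infinity;
    return function relay(newValue) {
        if (now() - lastFired >= intervalMs) {
            lastFired = now();
            target(newValue);
        }
        // Intermediate values are simply dropped; a real valve might also
        // flush the last pending value at the end of each interval.
    };
}

// Usage with a fake clock: a sample arrives every "millisecond", but the
// render target only sees one value per 1000ms.
let clock = 0;
const rendered = [];
const relay = makeThrottledRelay((v) => rendered.push(v), 1000, () => clock);
for (clock = 0; clock < 3000; clock++) {
    relay(clock);
}
console.log(rendered); // [0, 1000, 2000]
```

The statefulness (`lastFired`) is exactly what the current stateless relay idiom can't express, which is why this is pegged to the TangledMat rewrite.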
sgithens
that's awesome
maybe I can revive my lichen model graph demo when that's ready
Thu, Apr 22 2021
the-t-in-rtf
I'm lichen the sound of that.
bosmon
:)
sgithens
hahahahaha
it harkens all the way back to the original night we were in the yellow deli till like 1 in the morning with Bogdanovitch creating the quartz composer clock timer in a few minutes
having just arrived at the Boulder Bus Depot and reporting straight to duty at THE YELLOW DELI!
I cannot wait to finally visualize the model relays
Yesterday
the-t-in-rtf
So, Antranig was sharing a potential future project with me, with some collaborative aspects.
Which got me thinking about who gets to update which parts of the tangled mat.
bosmon: Is there already some thinking about scenarios where some but not all content is available or manipulable?
so in your live website scenario, you might want for everyone to view some content, a few people to view additional content such as annotations, that's the simplest divide.
Similarly you might want for some people to be able to edit textual content, but have fewer stakeholders who can change the things that make the system work.
you might hold back the ability to edit options that make it possible to persist things, and certainly you'd still have "secrets" that are necessary for the system to operate, but which should not be exposed to viewers.
this is more "which parts are live for whom?" I guess
bosmon
Well, that's a pretty interesting issue
The way I'd frame it is to go up one further level of pluralism - that is, "which parts are live for whom from the perspective of whom" : P
What I/we imagine is a system in which all of the system might appear live, at least from the perspective of the individual who is using the interface - but that only some of those edits can be pushed out to update the view of the system as seen by others
But yes, certainly one wants systems that only apparently permit limited updates too - otherwise one ends up in the Smalltalk hell where a trivial edit can make the entire system inoperable
the-t-in-rtf
yes, that's what I was thinking of
bosmon
I was just on with Jonathan Edwards yesterday who was talking about this Smalltalk hell aspect
And because the system doesn't have very good coordinates or metadata about them, it is hard to draw up and share schemas that actually explain which parts of the system are meant to be modifiable for which purposes
It was great in the very ancient days when Sir Clive could say in his manual "Nothing you can possibly do at the command line can possibly damage the configuration of the computer"
But this is of course because all of its essential operating characteristics were encoded in ROM : P
the-t-in-rtf
yes, I guess I was hoping for (movable) "here there be dragons" kind of boundaries
where you could present a thing fairly simply but still allow "going behind the curtain" to change things if you want
the-t-in-rtf
What I/we imagine is a system in which all of the system might appear live, at least from the perspective of the individual who is using the interface - but that only some of those edits can be pushed out to update the view of the system as seen by others
That raises another wrinkle WRT persistence. Let's say I'm editing "my" instance of a, let's say, "composition", which originally pulled content (perhaps even itself) from some kind of persistence layer.
How do we reconcile my changes with incoming changes from others?
In one scenario I can imagine, "making it mine" also breaks the persistence.
In another I have to intentionally change the current value and also cut that material out of the persistence mechanisms.
A third scenario would make local changes ephemeral, like editing the DOM with browser dev tools.
jobara
If the changes are persistent, what happens when the system itself is updated or modified?
the-t-in-rtf
well, there's not one system.
I might opt out of parts of a shared contract without wanting to opt out of others.
but your question is exactly what we're discussing.
a subset of this problem is "unsaved local changes"
many systems let you do that and have a step to push that out into the wider world
even if that step is hitting enter.
bosmon
Yes - this is where I imagine git and github as the foundation of our pluralist system
"Your own edits" have the status of work in a private branch - whether this is persisted just into the browser or else into your own repo
And then we can invoke all the standard community workflow for deciding whether other people want to accept your pushed updates, comment on them, etc.
I am nudging Cindy steadily into building the underpinnings of all this stuff, starting with the little "inverted wordles" applet that we are currently noodling with in WeCount
bosmon
Now writing a massive doc comment just before fluid.renderer.resolveTemplateContainer which seems to be arguing that essentially all "new renderer" development over the last 2 years has been wrong
I've felt uneasy for a while that we ended up with a rigid "workflow system" that positioned all DOM manipulation as definitely after all model resolution
This seemed to rule out elementary things that had already been tackled in the-t-in-rtf's "gpii binder", which is capable of sourcing model values out of markup
And I guess last year was so stressful and contested I wilfully tried to ignore the fact that I seemed to have sunk a lot of costs into designing a system which isn't capable of coping with that
And then the wider question is - if we allow something resembling "DOM manipulation" to occur interwoven with model resolution, is the entire "workflow system" I painfully designed into the post FLUID-6148 framework faulty as well, aka "ModelComponentQix"
I am beginning to fear it really is
Especially when I see the great variety of alternative DOM update strategies which are being tinkered with in places like https://stackoverflow.com/questions/43401497/why-is-documentfragment-no-faster-than-repeated-dom-access
Some people, for example, note that simply deferring all DOM updates into requestAnimationFrame offers almost all the same benefits of using DocumentFragments
Although some also respond that there are also situations, which also seem to quite closely resemble ours, in which DocumentFragments are worthwhile
In that they are very much faster at cloning bigger segments of markup
So I guess we will carry on using them
It would certainly be a huge simplification if the "new new framework" could be free of the workflow system
Although some parts of it are clearly necessary for dealing with asynchrony
Anyway - step 1 seems to be - eliminate almost all of the activities of the "renderer workflow" phase, and allow as much as possible of the DOM to be woven together during all the other phases
And perhaps leave behind just a final "blue touchpaper" phase which takes all of these outputs and binds them into the real document either there and then using node.appendChild(documentFragment) or else schedules the same using requestAnimationFrame
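[Editor's note: a sketch of that final "blue touchpaper" phase - queue rendered fragments during the other phases, then bind them into the document in one batch via an injected scheduler (requestAnimationFrame in a browser, a synchronous stand-in here). The names are illustrative, not the renderer's real API.]

```javascript
// Batch DOM bindings: enqueue freely, flush once per scheduled "frame".
function makeBinder(schedule) {
    const queue = [];
    let scheduled = false;
    return {
        enqueue(parent, fragment) {
            queue.push({ parent, fragment });
            if (!scheduled) {
                scheduled = true;
                schedule(() => {
                    queue.splice(0).forEach(({ parent, fragment }) =>
                        parent.appendChild(fragment));
                    scheduled = false;
                });
            }
        }
    };
}

// Usage outside a browser, with a fake parent node and a manual scheduler.
const flushes = [];
const binder = makeBinder((cb) => flushes.push(cb));
const fakeParent = { children: [], appendChild(node) { this.children.push(node); } };
binder.enqueue(fakeParent, "fragment-1");
binder.enqueue(fakeParent, "fragment-2");
flushes.forEach((cb) => cb()); // one "frame" binds both fragments
console.log(fakeParent.children); // ["fragment-1", "fragment-2"]
```

In a browser, `schedule` would simply be `requestAnimationFrame`, so any number of enqueues between frames collapse into one batch of `appendChild` calls.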
sgithens
interesting, I'd never heard of Window.requestAnimationFrame before
Today
bosmon
Well... thoughts of last night are now tending to - AU REVOIR TO KULKARNI
I've been considering Infusion's error idiom, especially with respect to the kind of experience that Boxer offers, and think that it is probably time to reframe its "background mythology"
For many years, following that talk to the Boulder CS group given by visiting Milind Kulkarni about his parallel computing idiom, I've always imagined that Infusion was shaping up to be a system of that kind of "spatialised computation" - and in that idiom, if you found someone else had "done your work" already, the transaction you were in was backed out and erased as if it had never been
But now we are very much more clear that Infusion is a substrate for registering pluralistic human intentions, rather than an algorithmic computing system, I think it's clear that this is really inappropriate
Not to mention ptcher's considerable annoyance at finding his transactions simply backed out, leaving him to grub around for some rejection payload which he hadn't registered for in order to find out what had gone wrong : P
So, in Boxer's idiom, a "computation in error" simply sits there on the surface of the system - and the error itself is simply a part of its surface form
And then perhaps there is the option to tinker with it somehow in order to reduce its erroneousness and try it again
sgithens had shown me this funny "pipe" character in Boxer, which is the kind of "firebreak" it has in order to stop a freshly executed computation from immediately trying to execute again ...
It's the kind of little detail that strikes you immediately on trying to use Boxer but isn't foregrounded in any of the teaching materials that I found
Anyway, I think it's clear we need to move Infusion to a more permissive model for errors - and perhaps leave "wipe out the transaction results" as an optional extra for environments like, say, Kettle where you have some reasonable expectation that the original author has "gone away" at the end of a request
But I think it's clear in Jonathan Edwards' model as well that a "computation expands to a trace" - and so the CS idiom of wiping out a failed transaction as if it had never happened is yet another example of this "computational effacement" that we are so against in all other contexts
E.g. our general opposition to the idea of function calls at all : P
I was nudged into this as I was thinking more about where the "new renderer" really should store its data structures, and it's obvious that it should put them into Infusion's transaction records rather than scrawling them on top-level components as it currently does
And in the light of the-t-in-rtf's question yesterday, it's clear that partially executed transactions are really "sub-worlds" of the kind we were talking about - and the "new new framework" will be much more capable of representing them as such via its TANGLED MAT
It was always a bit clear in retrospect that our behaviour of wiping out everything that it constructed via transaction.cancel() if ever you receive a fluid.fail() was a piece of unnecessary brutality
Given that the natural idiom, since FLUID-6148, has been that the return value of any construction in Infusion is a "partially constructed transaction"
This reminds me again of the upcoming "heapless" idiom of Infusion where rather than the current crazed workflow system, you simply have a giant list of paths that you are working through stashed in a giant priority queue - so that whenever it returns from some suspended I/O, the framework always knows exactly where it had got to in its worklist - it's simply whatever is at the front of the queue
And so the behaviour when suspending as a result of an error is just the same - it means that the user can themselves also simply look at the queue too in order to discover what was in progress when the error occurred
Rather than the framework's intention being locked up in a malign set of Russian doll closures or promise chains
And perhaps "errors" could even "taint" values with a blocking marker just as I/O might
And the framework could then get on with "executing" something else that wasn't in error
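[Editor's note: a rough sketch of the "heapless" worklist idea above - pending evaluation paths sit in one priority queue, so after suspended I/O or an error, both the framework and the user can inspect the front of the queue to see exactly what was in progress. A sorted array stands in for a real priority queue; the path names are invented and nothing here is current Infusion API.]

```javascript
// A worklist of pending paths, lowest priority number first.
function makeWorklist() {
    const queue = [];
    return {
        push(path, priority) {
            queue.push({ path, priority });
            queue.sort((a, b) => a.priority - b.priority);
        },
        peek() { return queue[0]; },        // what the framework is "at"
        pop() { return queue.shift(); },    // complete the front item
        pending() { return queue.map((e) => e.path); } // inspectable by the user
    };
}

const worklist = makeWorklist();
worklist.push("model.results", 2);
worklist.push("template.url", 1);   // must resolve before the model
worklist.push("dom.container", 3);
console.log(worklist.peek().path);  // "template.url"
worklist.pop();
console.log(worklist.pending());    // ["model.results", "dom.container"]
```

The contrast with "Russian doll closures or promise chains" is that the in-progress state here is a plain data structure anyone can read, not control flow locked inside the engine.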
jobara
Does that mean we could build a debugger that would allow us to rewind time by sort of stepping backwards in the list, or maybe re-executing up to a prior step?
bosmon
Creating a system which could show you several errors at once, as the best linters do, rather than just tossing its cookies at the first error it runs into
jobara: back-in-time debuggers are certainly an elementary required thing, yes - all the hip kid frameworks have this already, such as React and Vue : P
But they can do this because of their authoritarian notion of where "state" is in the system - under the "reactor" model, there is simply a single giant gob of state at the centre of the system, and all other system state is a function of this state
So you can step back in time simply by restoring earlier versions of that reactor state
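[Editor's note: a minimal sketch of that snapshot-and-restore trick - a single central state object whose history is a list of deep copies. Illustrative only; it also makes the limitation concrete, since the history records *what* the state was at each step but nothing about why.]

```javascript
// Time travel over one central "gob of state" via snapshots.
function makeTimeTravelStore(initial) {
    const clone = (o) => JSON.parse(JSON.stringify(o)); // deep-copy snapshots
    const history = [clone(initial)];
    let cursor = 0;
    return {
        get state() { return history[cursor]; },
        update(patch) {
            const next = Object.assign(clone(history[cursor]), patch);
            history.splice(cursor + 1); // editing the past discards the old "future"
            history.push(next);
            cursor++;
        },
        back() { if (cursor > 0) { cursor--; } },
        forward() { if (cursor < history.length - 1) { cursor++; } }
    };
}

const store = makeTimeTravelStore({ count: 0 });
store.update({ count: 1 });
store.update({ count: 2 });
store.back();
console.log(store.state.count); // 1
```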
But this doesn't help you to understand why the state came to have that particular value
Infusion could have chosen such a cheap trick, but instead we have to accept that i) state updates can arise in any part of the system, and ii) the rules that transmit state around the system should preferably be encoded declaratively and integrally
So that questions like, "What part of the system could have updated this state" can be answered in closed form - preferably just by hovering over them : P
And this is the kind of crucial thing that is missing from systems like Boxer
It's impossible I think, even in theory, to understand what part of a Boxer system could influence any other part - since you have no easy way of knowing "what is in scope" from where
sgithens shared with me a modest-sized Boxer design for a "turtle running around a cube" and I think it is already incomprehensible
Although Andy di Sessa would no doubt disagree
bosmon
Re the timing problems I was talking about yesterday - here's the circle I didn't want to face last year
What if there was the following situation - i) that the choice of a renderer template URL was determined by some material inside the component's model, and that at the same time ii) the component's model wanted to draw its initial value from something in the template markup?
The current "workflow model" of Infusion just can't cope with this, since it insists that the component's model is "fully evaluated" before any rendering/interaction with markup begins