export type ResizeObserverEntryCallback = (entry: ResizeObserverEntry) => void;

export class ResizeObserverManager {
  // Callbacks registered per element.
  #elementMap = new WeakMap<Element, Set<ResizeObserverEntryCallback>>();
  // Most recent entry observed for each element.
  #elementEntry = new WeakMap<Element, ResizeObserverEntry>();

  // A single shared ResizeObserver that fans each entry out to the registered callbacks.
  #vo = new ResizeObserver((entries) => {
    for (const entry of entries) {
      this.#elementEntry.set(entry.target, entry);
      this.#elementMap.get(entry.target)?.forEach((callback) => callback(entry));
    }
  });
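
The snippet above stops before the subscription methods. A minimal sketch of what observe/unobserve could look like to finish the class; the method names and the replay-last-entry behaviour are assumptions for illustration, not the original implementation:

  // Hypothetical continuation of ResizeObserverManager, not the original code.
  observe(target: Element, callback: ResizeObserverEntryCallback): void {
    let callbacks = this.#elementMap.get(target);
    if (!callbacks) {
      callbacks = new Set();
      this.#elementMap.set(target, callbacks);
      this.#vo.observe(target); // start observing only when the first subscriber arrives
    }
    callbacks.add(callback);

    // Replay the last known entry so a late subscriber gets an initial measurement.
    const lastEntry = this.#elementEntry.get(target);
    if (lastEntry) callback(lastEntry);
  }

  unobserve(target: Element, callback: ResizeObserverEntryCallback): void {
    const callbacks = this.#elementMap.get(target);
    if (!callbacks) return;
    callbacks.delete(callback);
    if (callbacks.size === 0) {
      this.#elementMap.delete(target);
      this.#vo.unobserve(target); // stop observing once nobody is subscribed
    }
  }
}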
const fetchMachine = Machine({
  id: 'telemetry',
  initial: 'A_OFF',
  context: {
    timestampStart: null,
    currentADuration: 0,
    currentCounter: 0,
  },
  states: {
    A_OFF: {
The goal here is to be able to dynamically add and spawn actors within xstate (a minimal sketch follows these notes).
- The id provided will be the one that resolves on the system object for system.get(id) calls from other actors.
- There isn't much garbage collection or any concept of removing actors yet.
- This probably works best with other full state machines.
- Sending a message to a machine that hasn't been spawned isn't captured or buffered in any way; the event errors or is lost.
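
A minimal sketch of this spawning pattern, assuming XState v5's actor-system APIs (spawn with a systemId option, system.get, and sendTo); the machine, event, and id names below are made up for illustration:

import { createMachine, createActor, assign, sendTo } from 'xstate';

// Hypothetical child machine, only here for illustration.
const counterMachine = createMachine({
  id: 'counter',
  initial: 'active',
  context: { count: 0 },
  states: {
    active: {
      on: {
        INC: {
          actions: assign({ count: ({ context }) => context.count + 1 }),
        },
      },
    },
  },
});

const parentMachine = createMachine({
  id: 'parent',
  initial: 'idle',
  context: { counterRef: null },
  states: {
    idle: {
      on: {
        // Spawn the child dynamically and register it on the system under an id,
        // so other actors can resolve it via system.get('counter-1').
        SPAWN_COUNTER: {
          actions: assign(({ spawn }) => ({
            counterRef: spawn(counterMachine, { systemId: 'counter-1' }),
          })),
        },
        // Resolve the spawned actor from the system and forward an event to it.
        // If 'counter-1' was never spawned, nothing buffers the event for later.
        INCREMENT: {
          actions: sendTo(({ system }) => system.get('counter-1'), { type: 'INC' }),
        },
      },
    },
  },
});

const actor = createActor(parentMachine).start();
actor.send({ type: 'SPAWN_COUNTER' });
actor.send({ type: 'INCREMENT' });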
Presented by Evadne Wu at Code BEAM Lite in Stockholm, Sweden on 12 May 2023
We have celebrated 10 years of Elixir and also nearly 25 years of Erlang since the open source release in December 1998.
Most of the libraries that were needed to make the ecosystem viable have been built, talks given, books written, conferences held and training sessions provided. A new generation of companies has been built on top of the Elixir / Erlang ecosystem. By all measures, we have achieved further reach and maturity than 5 years ago.
Yoav Goldberg, April 2023.
With the release of the ChatGPT model and follow-up large language models (LLMs), there was a lot of discussion of the importance of "RLHF training", that is, "reinforcement learning from human feedback". I was puzzled for a while as to why RL (Reinforcement Learning) is better than learning from demonstrations (a.k.a. supervised learning) for training language models. Shouldn't learning from demonstrations (or, in language model terminology, "instruction fine tuning", learning to imitate human written answers) be sufficient? I came up with a theoretical argument that was somewhat convincing. But I came to realize there is an additional argument which not only supports the case of RL training, but also requires it, in particular for models like ChatGPT. This additional argument is spelled out in (the first half of) a talk by John Schulman from OpenAI. This post pretty much
import { describe, test, expect } from "vitest";
import { CreateLoop } from "./CreateLoop";

/**
 * TODO:
 * - add a before all hook
 * - add an after all hook
 * - add a before state set hook
 * - add an after state set hook
 * - ability to preprocess arguments
 */
mapped types syntax and intro
https://www.typescriptlang.org/play?#code/C4TwDgpgBA8gRgKygXigbwFBW1AZge3wC4oBnYAJwEsA7AcwBosc4BDCkmgVwFs4IKGAL4YMoSFACSAEwg1gVUAB4AKgD4U6ZtgDaAaSi0oAawgh8uKCoC6JFfuvDR46AGVWPaKhlyFy+AhqGAD0wTgAegD8omLg0AAKFPg8VKRUuCCqGqiYOFD6hjQmZhZWtlAAFACUKBqJyakQqg5BIrES9SlpuFQQ0pqdqemZAUGhEdEYsgDGADbs0NP4NORQFGDTJIPdvdKiSyvAeFzAXBReaxsAdGwU1SFheU-PUFGiuCdnEFfAABZyFQqADdWLMasgNLkcCDZtooOM3kIqkA | |
a tweet that prompted the idea for the talk
https://twitter.com/kentcdodds/status/1608187990215655424 | |
But since this isn't possible with satisfies (it doesn't participate in inference; in an example like that it only provides contextual types), the answer was to use a function with a reverse mapped type.
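
A minimal sketch of that pattern (the type and function names are made up for illustration): an identity function whose parameter is a mapped type over an inferred type parameter, so the compiler infers the per-key types from the argument while still constraining its shape, which satisfies alone can't do.

// Hypothetical example of a function with a reverse mapped type.
type EventConfig<T> = {
  [K in keyof T]: {
    payload: T[K];
    handle: (payload: T[K]) => void;
  };
};

// T is inferred "in reverse" from the object literal passed in,
// so each handle callback gets a correctly typed payload parameter.
function defineEvents<T>(config: EventConfig<T>): EventConfig<T> {
  return config;
}

const events = defineEvents({
  click: {
    payload: { x: 0, y: 0 },
    handle: (payload) => console.log(payload.x, payload.y), // payload: { x: number; y: number }
  },
  keypress: {
    payload: { key: 'Enter' },
    handle: (payload) => console.log(payload.key), // payload: { key: string }
  },
});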
State machine code with no implementation:
const trainingMachine = createMachine<TrainingContext>({
predictableActionArguments: true,
id: 'training',
initial: 'initial',
context: {},
states: {
initial: {
on: {