@phunanon
Last active July 8, 2024 13:05
Chika v2:
- Manifesto: an OS that is performant, lean, multi-process, non-realtime, Arduino & Linux compatible, with a dynamically compiled, dynamically typed language
- Yes to heartbeats
- Yes to binds
- Yes to bursting
- Yes to Arduino + Linux
- Yes to no floating point
- No to hard persistent storage dependency
- No to exeForm (and call them expressions rather than forms)
- like Insitux, just count the number of arguments
- glossary: context - the thing persisted between heartbeats
- compile-time type-checking where possible
- runtime type-check is an operation
- (+ 2 2 2) in memory: [gap] 2 2 2, answer put in gap
- avoids having to collapse the stack
- only when type length is known
- compile opt-out switch due to increased memory usage
- maybe not just gap, but for compatible operations could overwrite the first argument
- custom call stack?
- worth it?
- workable with multi-processes?
- sleep/delay is asynchronous
- investigation shows that sleeping was enacted between heartbeats
- tail call optimisation is even easier
- perhaps recursive functions could be signed, with their number of recursions, X, encoded separately? Perhaps their hashes are only 24 bits long
- asynchronous, simpler messaging
- topics are 32bit, h"topic" compile-time hash helper syntax
- topics are checked like flags at heartbeat
- N slot hash table with linear probing (N configured at Chika compile-time)
- drawback: where are message payloads stored? Chika used synchronous messages for this reason: the context and payload were passed to the handler, and the payload became the new context
- maybe message payloads can only refer to data in the sender's heartbeat context? Like "if you care about X, here's the pointer to my context"
     - this approach would make synchronous infinite loops impossible
     - if topics were instead function names, could drive logic for programs without needing a heartbeat - sending messages to itself; but I think I like checking flags better...
- I don't think payloads could be stored on a dedicated stack, because that stack would be both read from and written to in one loop of programs
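The 32-bit topic flags and N-slot table above could look something like this C++ sketch (the hash function choice, FNV-1a, and all names are my assumptions, not a Chika spec): a `constexpr` hash stands in for the `h"topic"` compile-time helper, and raised topics sit in a linear-probed table that heartbeats check and then clear wholesale, which sidesteps linear probing's deletion problem.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Stand-in for the h"topic" helper: a 32-bit FNV-1a hash,
// evaluated at compile time. (Hash 0 is reserved as "empty slot".)
constexpr uint32_t topicHash(const char* s) {
  uint32_t h = 2166136261u;
  while (*s) { h ^= (uint8_t)*s++; h *= 16777619u; }
  return h;
}

constexpr size_t N = 8;  // slot count, fixed at engine compile time

struct TopicFlags {
  uint32_t slots[N] = {0};

  // Sender side: record a raised topic, linear probing on collision.
  bool raise(uint32_t topic) {
    for (size_t i = 0; i < N; ++i) {
      size_t at = (topic + i) % N;
      if (slots[at] == 0 || slots[at] == topic) {
        slots[at] = topic;
        return true;
      }
    }
    return false;  // table full: message dropped
  }

  // Receiver side, checked like a flag at heartbeat.
  bool check(uint32_t topic) const {
    for (size_t i = 0; i < N; ++i) {
      size_t at = (topic + i) % N;
      if (slots[at] == topic) return true;
      if (slots[at] == 0) return false;
    }
    return false;
  }

  // Wipe all flags once every program's heartbeat has run.
  void clearAll() { for (auto& s : slots) s = 0; }
};
```

Note this sketch deliberately carries no payloads, matching the flags-only direction of the notes; payload storage is the open question above.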
- incremental compilation
- references are typed (e.g. i$myInt)
- binds are automatically typed (e.g. myInt= 3)
- errors are messaged
- runtime graceful halt upon error
- items are prefixed with their descriptors
- unless Chika can disprove the hypothesis that truly random descriptor access is rare or goes unused
- a quick look reveals the engine moves a pointer to calculate things like the combined length of multiple items
- it often operated on item numbers, and finding an item's position in memory involved counting through descriptors
- "engine" not "virtual machine"
- rather than op-codes, 16-bit C++ function pointers?? (direct threading)
- potentially a massive performance boost, though it goes from 1 byte per operation to 2
- I think 16-bit addresses work up to 128kB of progmem as they refer to words
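To make the op-codes-versus-function-pointers tradeoff concrete, here is a simplified sketch in portable C++ (real direct threading on AVR would use word-addressed 16-bit pointers and each op would tail-dispatch the next; this loop-over-pointers form is closer to call threading, but it shows the decode-free dispatch): the program is an array of function pointers rather than 1-byte opcodes, so executing an instruction is just a call through the pointer with no decode step.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical sketch: each instruction IS a function pointer (2 bytes),
// so dispatch needs no opcode lookup table or switch.
struct VM {
  std::vector<int32_t> stack;
};
using Op = void (*)(VM&);

void opPush2(VM& vm) { vm.stack.push_back(2); }

void opAdd(VM& vm) {  // pop one item, add it into the new top
  int32_t b = vm.stack.back();
  vm.stack.pop_back();
  vm.stack.back() += b;
}

// Dispatch loop: call each instruction directly.
void run(VM& vm, const Op* program, size_t len) {
  for (size_t i = 0; i < len; ++i) program[i](vm);
}
```

A program like `(+ 2 2 2)` becomes `{opPush2, opPush2, opPush2, opAdd, opAdd}` - five pointers (10 bytes) where a byte-coded form might use 5, which is the 1B-to-2B cost noted above.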
- what's up with having to duplicate items before processing? Seriously?
- I remember trying to implement a hybrid approach of constants and copies, which slowed things down more than just copying
- there must be a way to opt out of copying arguments... I need to relearn why operations don't operate directly on arguments
- Chika prolifically used item numbers e.g. (+ 2 2 2) would be three arguments starting at 0, stacking the answer then collapsing the stack
- what if arguments were a list of pointers instead?
- bound values were copied to the top of the stack to form arguments - couldn't it instead just add to the argument pointer list?
- major drawback: pointers use more memory (2B per argument) than indexing arguments (2B), with intermediate arguments being quite common in practice, e.g. (+ 2 (+ 2 2)) - though that example uses roughly the same amount of memory either way
- but is 2B not better than copying what might potentially be dozens of bytes just to, say, get the length of something?
- minor drawback: creating a vector would require copying items into place rather than just encapsulating the stack
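The pointer-list idea above can be sketched in C++ like so (the `Item` layout and names are my assumptions for illustration): an operation receives a small list of pointers to items that stay in place - a bound value's descriptor is referenced rather than copied to the stack top - so something like `len` never touches the payload bytes at all.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical item: a descriptor (here just a length) plus a pointer
// to the payload wherever it already lives (stack, binds, constants).
struct Item {
  const uint8_t* data;
  uint16_t len;
};

// Arguments as a list of pointers: 2B-ish per argument on a small MCU,
// but zero bytes of payload copied.
uint16_t opLen(const std::vector<const Item*>& args) {
  return args[0]->len;  // read the descriptor in place
}
```

The minor drawback noted above shows up here too: building a vector from such arguments would mean copying each pointed-to item into a contiguous block, rather than just encapsulating a run of the stack.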