
Gregory Terzian gterzian

@gterzian
gterzian / plan.md
Last active January 13, 2026 11:34

Blitz as a native a2ui renderer.

  1. A2UI agent where the transport is an automerge repo.
  • Use a VS Code extension and the LSP as the messaging layer between the extension and a Rust "proxy".
  • The Rust proxy validates LLM output and applies it to the automerge repo.

Repo structure:

  • One doc with a fixed id to bootstrap from; it tracks surfaces and user actions.
  • One doc per surface: it contains the components and the data.

User: Ok, I think we can solve this by removing pendingDbVersion, replacing it with requested_version, and then setting requested_version to dbVersion in abort.

Agent: I have successfully refactored the TLA+ specification to remove pendingDbVersion and use requestedVersion instead. Here is a summary of the changes:

User: Add an oldDbVersion variable.
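The shape of the refactor can be sketched as a TLA+ fragment. This is a hypothetical illustration, not the actual spec: requestedVersion replaces pendingDbVersion, Abort resets it to the current dbVersion, and oldDbVersion is the newly requested variable.

```tla
\* Hypothetical fragment, not the actual spec.
VARIABLES dbVersion, requestedVersion, oldDbVersion

Abort ==
    /\ requestedVersion # dbVersion      \* an upgrade was in flight
    /\ requestedVersion' = dbVersion     \* reset, per the refactor above
    /\ UNCHANGED <<dbVersion, oldDbVersion>>
```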

@gterzian
gterzian / pitch.md
Last active January 10, 2026 10:01

The Glass Box: A Servo-Based Generative UI Sandbox

Agents can already run code locally. What they still can't do, safely and ergonomically, is show their work and let the user interact with it.

The Problem: Secure Execution vs. Interactive Output

In practice, today’s approaches force a tradeoff (see A field guide to sandboxes for AI):

  1. Secure execution, weak UX: Containers, MicroVMs, and Wasm sandboxes are great for isolating compute, but they're often headless in practice: the user gets logs, diffs, and text.

@gterzian
gterzian / pr.md
Last active January 3, 2026 13:00

This PR adds a TLA+ spec for the transaction lifecycle logic.

Background: I am involved in the efforts at Servo to implement the IndexedDB 3.0 spec, and I found myself struggling with how the concurrency around the transaction lifecycle is specified. In particular, I found it hard to reason about how upgrade transactions interact with each other and with other types of transactions.

For example, to figure out that upgrade transactions exclude all other transactions, you have to look around the spec at things like step 1 of https://w3c.github.io/IndexedDB/#dom-idbdatabase-transaction, and the fact that https://w3c.github.io/IndexedDB/#opening waits for all other connections to close before proceeding with https://w3c.github.io/IndexedDB/#upgrade-a-database.
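That scattered exclusion guarantee is exactly the kind of property a spec can state in one place. A sketch of what it could look like in TLA+ (names are hypothetical, not taken from the actual PR):

```tla
\* Sketch of the exclusion property derived from the spec text above:
\* an upgrade ("versionchange") transaction never runs concurrently
\* with any other transaction.
UpgradeExclusion ==
    \A t \in runningTransactions :
        t.mode = "versionchange" => runningTransactions = {t}
```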

There is also some subtlety in how the connection queue and the various "wait" steps interact: the "Wait until all connections in openConnections are closed" at https://w3c.github.io/IndexedDB/#opening, and the "Wait for transac

@gterzian
gterzian / gen_ui.md
Last active December 13, 2025 12:18

Opportunity in GenUI

Makepad GenUI: the dev defines a widget library (the framework defines standard widgets), and the AI generates UI on the fly based on context.

Apply this to the Robrix AI chat by defining widgets and having the AI generate them on the fly.

The most basic widget is just text, corresponding to a text-only reply. Others could be whatever makes sense for the app (e.g. a draft message with a "send to chat" button)?

Requires a way to insert widgets dynamically into the UI. With Makepad, I guess that means using shaders or the reload functionality. On the web it would be a trivial DOM manipulation.
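A minimal sketch of such an app-defined widget library in Rust. The `Widget` variants and `render` function are hypothetical illustrations of the idea, not Makepad API:

```rust
// The dev/framework defines the widget set; the AI picks one per reply.
// Names here are hypothetical.
enum Widget {
    Text(String),                  // plain text reply: the most basic widget
    DraftMessage { body: String }, // draft message with a "send to chat" action
}

// Render a widget to a string, standing in for real UI insertion.
fn render(w: &Widget) -> String {
    match w {
        Widget::Text(s) => s.clone(),
        Widget::DraftMessage { body } => format!("[draft] {body} [send to chat]"),
    }
}

fn main() {
    let reply = Widget::Text("hello".into());
    assert_eq!(render(&reply), "hello");
    let draft = Widget::DraftMessage { body: "hi all".into() };
    assert_eq!(render(&draft), "[draft] hi all [send to chat]");
}
```

The point of the closed enum is that the AI can only select and fill widgets the app already defines, rather than emitting arbitrary UI.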

Here is a summary of the conversation log regarding the Moly project:

1. Rebranding and Naming

  • Name Change: The team decided to rename the app from "Moly" to "Moly AI" because "Moly" and "MolyApp" were already taken in the App Store.
  • Domain & Bundle ID: The domain moly.ai is taken by a different AI product. The team settled on org.molyai.app for the Apple Bundle ID.
  • Website: A landing page was published at moly-ai.ai, though some performance issues (lag on Firefox, GPU spikes) were noted.

2. Mobile Development (iOS & Android)

  • TestFlight: Julian successfully set up TestFlight for iOS beta testing. There were initial hurdles with "External Tester" access due to a missing ITSAppUsesNonExemptEncryption key in the plist.
  • Known Issues:

Robrix

AI utility chat

Separate chat for interacting with an LLM using tools in the context of the Robrix app. Note that while this can appear as a chat history to the user, it should not be constructed as such, in order to avoid prompt-injection pitfalls in the context of LLM tool use.

Features:

  • Connect with an LLM (local or remote).

Idea to integrate AI functionality into Robrix.

Iterate along the below lines:

  1. Integrate with a local/remote AI endpoint (Moly, Ollama, other?)
  • Goal: get a basic answer to a chat.
  • Private chat with the AI (separate from the Matrix chat stuff).
  2. Add basic tooling to the AI integration
  • Goal: one basic AI action, like "close app", which results in closing the app.
  • Actions are taken based on conversational interaction with the AI in the private chat.
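The second step could be sketched as follows; the action name and the parsing convention are hypothetical, just to show the shape of a single tool action:

```rust
// One basic AI action: the model's reply is parsed into a tool call,
// and "close_app" maps to an app-level action. Names are hypothetical.
#[derive(Debug)]
enum Action {
    CloseApp, // the one basic action: close the app
    None,     // no tool call recognized; treat as a plain chat reply
}

fn parse_tool_call(reply: &str) -> Action {
    match reply.trim() {
        "close_app" => Action::CloseApp,
        _ => Action::None,
    }
}

fn main() {
    assert!(matches!(parse_tool_call("close_app"), Action::CloseApp));
    assert!(matches!(parse_tool_call("hello"), Action::None));
}
```

A real integration would use the endpoint's structured tool-call format rather than matching on raw text, but the dispatch shape is the same.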

This file contains most of the chat history.

I want you to clean it up so that the most interesting parts remain (in terms of showing the evolution of this project):

  • Remove all of your comments where you only stated what task you performed.
  • Remove all terminal commands and other actions you performed.
  • Keep only your answers related to TLA+.
  • Break it up into sections, where the title summarizes what happened.
  • Debug sessions, where I keep telling you that something is wrong with the UI and attach screenshots, can be summarized.
  • Don't summarize everything; for the interesting stuff, quote it as such, including your replies (on the TLA+, for example).

So the spec cancels a navigation by changing the ongoing-navigation, which is an ID set on the "navigable", and checked towards the end of a navigation, as part of updating the session history entry.

So it's hard to see how such a setup would fit into Servo's implementation, because the way the spec works, it queues a task back on the "navigable's active window" to do that last step of the navigation, and then it loads a document.

So at the point of loading a document, the spec is on the event loop of the window of the document which was navigated; but in Servo, at that point, we may actually be on the event loop for the document which is the result of the navigation. So that makes it hard to ch