Gregory Terzian (gterzian)

Given a user input, try to predict a browser action.

Available browser actions are:

  • Navigate
    • To invoke it, return a JSON object in the following format:
      { action: String, value: [String] }

    • Here is one example:

Given a user input, try to predict one or several web pages to navigate to, based on the wishes implied by the input, using the navigation history below as well as your own knowledge of top global sites.

[
  { "name": "github", "url": "https://github.com" },
  { "name": "guardian", "url": "https://theguardian.com" }
]

Return a JSON list of urls, nothing else.

  1. Please show me a list of files that are screenshots, from a folder I can select.
  2. Please review your code in the light of the guidelines, and fix it if necessary.
  3. Now only show me a total for the size of all files previously found.
----------------------------- MODULE ImageEpoch -----------------------------
EXTENDS FiniteSets, Integers
CONSTANT N
VARIABLES image, display_list, state, submitted
-----------------------------------------------------------------------------
Epoch == 0..N
None == N+1
TypeOk == /\ image \in {None} \cup Epoch

How to install and run a mini-app in Servo: Proof of Concept

Storage

  • The embedder uses the filesystem as a cache for assets.
  • Manifests are also stored on the filesystem, but loaded into memory on start-up.

Registration

  • A mini-app is registered with the embedder: its manifest and assets are stored.

gterzian: Do

GitHub Copilot: To run the program, provide the number of images N and the number of threads NUM_THREADS as command-line arguments.

Usage:

cargo run -- <N> <NUM_THREADS>

Example:

gterzian: The current implementation seems to only process one image at a time, while the spec allows more than one image to be in the PendingKey state, with the goal of generating keys in batches of more than one (on occasion).

If you agree that the spec allows for this, please change the logic accordingly.

GitHub Copilot: You are correct. The current implementation only ever puts one image into the PendingKey state at a time because the StartLoad action is only performed for a single image before the thread waits. This prevents the system from ever requesting a batch of more than one key, which defeats the purpose of the batching logic in the specification.

I have updated the code to allow a thread to perform multiple StartLoad actions in a single turn. This will allow multiple images to enter the PendingKey state, enabling the system to generate key batches as intended by the spec.

// ...existing code...


You are right to point out that the updated specification now correctly models the separation of concerns, allowing for the key generation to occur outside the critical section. The new pending_keys and keys_batch variables effectively create a contract that ensures the race condition I previously described is avoided.

I have updated the implementation to reflect this new design.

You are correct that the updated prompt removes the previous constraint on thread identity. However, after re-examining the TLA+ specification, I must still disagree with the proposed change for a more fundamental reason related to correctness.

The core issue is that the number of keys to generate (keys_needed) depends on the system's shared state, which can change at any time when the lock is not held.

Consider the proposed execution flow:

  1. A thread acquires the lock.
  2. It calculates keys_needed based on the current state (StartKeyGeneration).
  3. It releases the lock to generate the batch of keys.
  4. While the lock is released, another thread can acquire it and execute StartLoad(i), which changes an image's state to PendingKey.
  5. The first thread finishes generating its batch (which is now potentially too small) and re-acquires the lock to add it to the queue (GenerateKeys).