source | gistUrl | gistFile
---|---|---
github-gist | | Todoist.md
You need to add a developer API key to be able to access your Todoist data. You can get one at https://app.todoist.com/app/settings/integrations/developer.
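For illustration only, here is a minimal Scala sketch of reading your tasks with that token. It assumes the Todoist REST API v2 `tasks` endpoint with Bearer authentication; the `TODOIST_API_TOKEN` environment variable and the `listTasks` entry point are made up for this example.

```scala
import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}

@main def listTasks(): Unit =
  // Assumption: the token is stored in an environment variable of your choosing.
  val token = sys.env.getOrElse("TODOIST_API_TOKEN", sys.error("set TODOIST_API_TOKEN"))

  // Assumed endpoint: Todoist REST API v2, authenticated with a Bearer token.
  val request = HttpRequest.newBuilder()
    .uri(URI.create("https://api.todoist.com/rest/v2/tasks"))
    .header("Authorization", s"Bearer $token")
    .GET()
    .build()

  val response = HttpClient.newHttpClient()
    .send(request, HttpResponse.BodyHandlers.ofString())

  println(response.body()) // raw JSON array of active tasks
```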
In the Generative AI Age, your ability to generate prompts is your ability to generate results.
Claude 3.5 Sonnet and o1 series models are recommended for meta prompting.
Replace `{{user-input}}` with your own input to generate prompts. Use `mp_*.txt` as example user-inputs to see how to generate high quality prompts.
//> using scala 3.3
import scala.util.{Failure, NotGiven, Success, Try, boundary}
import boundary.{Label, break}
import scala.annotation.targetName
/**
 * Proof of concept implementation of a syntax similar to Kotlin and
 * TypeScript. Within the context provided by [[getEither]], you can call
 * `?` on any optional/failable type (currently supports [[Option]],
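The snippet is cut off before the implementation, but the imports (`boundary`, `Label`, `break`) point at the standard Scala 3 pattern for this kind of early-return syntax. Below is a minimal, hypothetical sketch of how such a `?` operator can be built on `scala.util.boundary`; the names `eitherBlocks` and `getEither`, the `Option` error message, and the exact signatures are assumptions for illustration, not the gist's actual code.

```scala
import scala.util.boundary
import boundary.{Label, break}

object eitherBlocks:

  /** Opens a scope in which `?` may short-circuit with the first Left. */
  inline def getEither[L, R](inline body: Label[Either[L, R]] ?=> R): Either[L, R] =
    boundary[Either[L, R]](Right(body))

  extension [L, R](e: Either[L, R])
    /** Unwraps a Right, or exits the enclosing getEither with the Left. */
    inline def ?(using Label[Either[L, Nothing]]): R = e match
      case Right(r) => r
      case Left(l)  => break(Left(l))

  extension [R](o: Option[R])
    /** Unwraps a Some, or exits with a Left (error type assumed to be String here). */
    inline def ?(using Label[Either[String, Nothing]]): R = o match
      case Some(r) => r
      case None    => break(Left("value was None"))

@main def demoEither(): Unit =
  import eitherBlocks.*
  val result: Either[String, Int] = getEither {
    val parsed: Either[String, Int] = Right(1)
    val a = parsed.?   // unwraps to 1, or exits with the Left
    val b = Some(2).?  // unwraps to 2, or exits with Left("value was None")
    a + b
  }
  println(result) // Right(3)
```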
Thank you for your work on Bleep! Developing a first-class Scala build experience that goes beyond Sbt is a daunting task, but one I believe is very much necessary for the health of industrial Scala.
Some preliminary thoughts:
Thank you for having a look and sharing your thoughts around this, greatly appreciated!
- I love that you went build-as-data. Tooling matters, and the build tool is the center of the entire tooling pipeline. Other tools need the ability to both read and write build data, and this can only be done economically with build-as-data.
Bleep is an experiment in how far we can take build-as-data. So far I see no limits.
I just read this trick for text compression, intended to save tokens in subsequent interactions during a long conversation, or in a subsequent long text to summarize.
The idea is to build a mapping from common words (or phrases) in a long text you intend to pass later to shorter codes, and then pass that text to gpt-4 encoded with the mapping. The encoded version contains fewer tokens than the original text. There are several techniques for identifying frequent words or phrases in a text, such as NER, TF-IDF, part-of-speech (POS) tagging, etc.
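As a rough illustration of the idea (not a tokenizer-aware implementation), the sketch below builds a crude frequency-based dictionary and rewrites the text with short codes. The object name, the `§N` code format, and the thresholds are arbitrary choices for this example; a real setup would pick phrases via NER/TF-IDF/POS and measure actual token counts.

```scala
object PromptCompressor:

  /** Maps the `maxEntries` most frequent words of at least `minLength` characters
    * to compact codes (`§0`, `§1`, ...) and returns (mapping, encoded text).
    * Plain word frequency stands in for fancier phrase extraction (NER, TF-IDF, POS).
    */
  def compress(text: String, minLength: Int = 6, maxEntries: Int = 20): (Map[String, String], String) =
    val words = text.split("\\s+").toSeq
    val frequent = words
      .filter(_.length >= minLength)
      .groupBy(identity)
      .view.mapValues(_.size).toSeq
      .sortBy { case (_, count) => -count }
      .take(maxEntries)
      .map { case (word, _) => word }

    val mapping = frequent.zipWithIndex.map { case (word, i) => word -> s"§$i" }.toMap
    val encoded = words.map(w => mapping.getOrElse(w, w)).mkString(" ")
    (mapping, encoded)

@main def compressDemo(): Unit =
  val text = "build as data build as data tooling pipeline tooling pipeline tooling"
  val (mapping, encoded) = PromptCompressor.compress(text, minLength = 5, maxEntries = 4)
  println(mapping) // word -> code table to send along with the encoded text
  println(encoded) // same text with the frequent long words replaced by codes
```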