An LLM is quite efficient at converting from one language to another. The source language can be very fuzzy: it doesn’t need to be a “real” programming language, be syntactically or semantically correct, or use existing APIs. It just needs to convey the right base concept for the LLM to fetch the answer in the right target language.
Note the SYSTEM prompt used here, which cuts out the usual GPT-4 verbiage and keeps the model on track to generate proper code. Also note that I start a new chat session for every experiment.
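The exact SYSTEM prompt is in the screenshots; as a rough sketch, the kind of prompt I mean looks something like this (my own wording, not canonical):

```
You are a code generator.
Reply with code only. Do not explain the code, do not apologize,
and do not add any prose before or after the code block.
```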
It is also worthwhile and instructive to regenerate the output multiple times, to understand how much of the result is luck versus how robust the prompt itself is.
This is the most interesting use case. Minimal prompting leads to very good results. It also saves on tokens.
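To give an idea of how minimal this can get (a hypothetical sketch, not one of the prompts from the screenshots), something as terse as this is often enough to get a working implementation back:

```
fetch(url) -> json
```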
What is interesting about prompting in pseudocode is noticing how small syntactic and lexical changes can have a huge effect. Because so many programming languages are whitespace-sensitive, whitespace can make a huge difference!
Adding parameters and their names can drastically change the output, without ever naming an explicit programming language.
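For instance (my own sketch, not a prompt from the screenshots), the named parameters below give the model much more to anchor on than the bare one-liner above, and will typically pull in timeout and retry handling on their own:

```
fetch(url, timeout, max_retries) -> response
```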
Let’s home in on the HTTP fetch use case, which has a decent number of examples across programming languages. In particular, let’s focus on the Go language, which tends to have a single idiomatic way of doing something, instead of a plethora of approaches and libraries.
Moving the go token around confuses the LLM and causes it to generate full programs.
Note how actually writing this out in full confuses the model to the point that it no longer knows what we are asking for.
We need to add an explicit verb.
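In pseudocode terms, the change is something like this (my own paraphrase of the kind of edit meant here, not the exact prompts):

```
url -> body          (too ambiguous, no action stated)
fetch url -> body    (explicit verb, clear intent)
```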
Let’s see what happens if we replace the type token with the interface token.
We can even ask it to mix up languages. Specifying a go idiom (channels for asynchronous results) causes it to generate python async code. Note that switching from gpt-4 to gpt-3.5 works better for these more experimental use cases, as gpt-4 tries very hard to produce “correct code” and will often ignore parts of your prompt altogether.
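A prompt in this spirit might look something like the following (a hypothetical sketch, not the exact prompt used), deliberately mixing a Go idiom into a Python request:

```
async fetch(urls) -> results on a chan, in python
```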
Doing this, you might at times see really interesting mashups, like Go with Python syntax. Those are hard to reproduce.
Using the completion API also leads to interesting results. How interesting is up to you. I like to approach this as a way to iterate through different DSL patterns.
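With the completion API there is no instruction at all: you hand the model a bare prefix and let it continue. A hypothetical prefix for iterating on a DSL might look like this (the DSL itself is invented for illustration):

```
# tiny pipeline DSL
GET /users | filter active | pick name, email
GET /orders |
```

The model's continuation of the second line shows you which patterns it considers natural for the DSL you have sketched so far.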