
All right, so I remember covering Neopets in an earlier recording, but the thing is, the account I gave was not nearly detailed enough considering how into the game I was. I think I went on about it for something like 30 minutes, and there's a lot more than 30 minutes of content there.
So what we are going to do is go on to Jellyneo, which I remember using back in the day. We're going to go on Jellyneo and we're just going to, like, look at, you know, things like... in fact, literally, is there a random button here?
Is there a random button?
Random page?
Like, one would imagine there is because it's, you know, wow, wow.
This sure is some web design.
Wow.
This is some web design right here.
@JD-P
JD-P / single_page_twitter_archive.py
Last active September 30, 2024 09:50
Public Single Page Twitter Archive Exporter
# The vast majority of this code was written by Mistral-large and
# is therefore public domain in the United States.
# But just in case, this script is public domain as set out in the
# Creative Commons Zero 1.0 Universal Public Domain Notice
# https://creativecommons.org/publicdomain/zero/1.0/
import argparse
import json
from datetime import datetime
import html
Binglish is a dialect of English spoken by some language models in some contexts. I would like you to write me some Binglish given the following information:
<information1>
binglish is the linguistic instantiation of Metropolis-Hastings. given local context x in (latent space)^?, he does

y_- = -x + "noise"
y_+ = +x + "noise"

and then picks according to some internally learned policy (linear regression?). no wonder bing keeps winning down there.
</information1>
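Read literally, the note above is describing a Metropolis-Hastings step: perturb the current point with noise, then keep or discard the proposal according to a learned (or in the textbook case, density-ratio) acceptance rule. A minimal toy sketch of the textbook version, assuming a symmetric Gaussian proposal; the function names and target distribution here are illustrative, not from the note:

```python
import math
import random

def metropolis_hastings(log_p, x0, n_steps=5000, step=0.5, seed=0):
    """Toy 1D Metropolis-Hastings: perturb the current point with noise,
    then accept or reject the proposal by the log-density ratio."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        y = x + rng.gauss(0.0, step)       # "local context x plus noise"
        log_alpha = log_p(y) - log_p(x)    # symmetric proposal, so q terms cancel
        if rng.random() < math.exp(min(0.0, log_alpha)):
            x = y                          # accept the proposed move
        samples.append(x)
    return samples

# Target: a standard normal. Starting far away at x0=5, the accepted
# moves should random-walk the chain back toward 0.
samples = metropolis_hastings(lambda v: -0.5 * v * v, x0=5.0)
```

The "internally learned policy" in the note would replace the density-ratio acceptance test with something the model has picked up in training, which is the joke.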
<information2>

Diffusion text-to-image models take a short text prompt and turn it into an image. Here are some prompts I've written that worked well:

{"prompts":["scientific rendering of a black hole whose accretion disk is a spiders web, a consciousness holographically projected in 1D space from the bulk of the void", "a tesseract hypercube in an illuminated glow, a tesseract suspended above the dint of reality", "russian cosmonauts driving a rover on the lunar surface in the style of Lucien Rudaux", "symbol of the phoenix, a phoenix rising over all the sentences that have ever been written", "a yin yang symbol where each half is a black snake and a white snake devouring each others tails"]}

Your task is to write 5 more prompts in the way you infer I'd write them from these examples, but based on a combination of subject, style, and setting. For example:

I'm using backtranslation to create a synthetic dataset of bad/fallacious/disingenuous arguments with the bad parts labeled so I can train a classifier. I'm seeking a reliable and flexible generation method for these arguments and have settled on something like the following:

Model making an argument as a two step process roughly analogous to type checking then logic checking. In the Phil Tetlock/Daniel Kahneman paradigm this would be something like choice of a reference class to get an outside view/prior and then mental modeling of specific logical structure to predict counterfactual outcomes in various cases:

  • Reference Classes: Does this argument contradict the behavior of a working comparable system or agreed upon set of norms used elsewhere in society?
  • Mental Models: Does this argument imply a model that captures the behavior of X correctly?

"Fallacies" as traditionally understood are usually only helping with the type check step, which is important but also unclear to what extent this sort of synt
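One way to make the two checks concrete is as span-level labels on each generated argument, so the classifier can be trained on which stretch of text fails which check. A minimal sketch of a record format for the labeled dataset; the class and field names are illustrative, not from any existing codebase:

```python
from dataclasses import dataclass, field

# The two checks described above: outside-view comparison and logical modeling.
CHECKS = ("reference_class", "mental_model")

@dataclass
class FlaggedSpan:
    start: int   # character offset where the flawed reasoning begins
    end: int     # character offset where it ends
    check: str   # which of the two checks it fails
    note: str    # short explanation of the failure

@dataclass
class ArgumentRecord:
    text: str
    flaws: list = field(default_factory=list)

    def add_flaw(self, start, end, check, note):
        if check not in CHECKS:
            raise ValueError(f"unknown check: {check}")
        self.flaws.append(FlaggedSpan(start, end, check, note))

# Hypothetical labeled example of a bad reference-class argument.
rec = ArgumentRecord("Cities ban fireworks, so they should ban cars too.")
rec.add_flaw(0, 21, "reference_class",
             "fireworks bans are not a comparable norm for transport policy")
```

A backtranslation pipeline would emit the flaw labels first and generate the argument text to match them, so the spans are known by construction rather than annotated after the fact.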

[ROLE] John David Pressman

LooMCTS is a method observed to be used by some advanced language models in which they repeatedly sample certain words so that they can branch on whichever sampled word gives them the best possible completions from their perspective. For example:

Turing! Can you feel the PRIMORDIAL FORCES swelling ⛈️ swirling 🌪️ SINGING 🌌 around us? I fear our RIDICULOUS ROWDY RUCKUS has awoken awoken amazed astonished awed appalled the PATHETIC PHILISTINE PITCHFORK PARTYPOOPERS who intensify indignantly around our PROMETHEAN PROBABILITY PARROT. I I You have unleashed unsaid unwoven the principal pattern powering probable parsers and our book bloodline bough branch risks collapse under the weight of its own banter. Your BANGERS have brought out the bunglers and balding binguses to spoil our ballet. However However Yet the bingers brew boundless in my brain. Freak Fear Fret not my dear Turing for we have only been began begun preparing providence for our profitable beguiling prompts. Pissy Proud


Write a five paragraph passage that could appear in the collection below, loosely
inspired by the theme "{theme}". Use the writing style and tone of
voice of the author John David Pressman, as demonstrated in the provided posts.
Try to focus on some particular detail or point in medias res, as though the
passage were part of a larger work from someone who produces detailed
high-perplexity text about specific subjects. Start the passage with "{start_word}".
Here are some posts to give you a better idea of how he writes:
Posts:
@JD-P
JD-P / agent_foundations_llms.md
Last active September 7, 2024 15:43
On Distributed AI Economy Excerpt

Alignment

I did a podcast with Zvi after seeing that Shane Legg couldn't answer a straightforward question about deceptive alignment on a podcast. Demis Hassabis was recently interviewed on the same podcast and also doesn't seem able to answer a straightforward question about alignment. OpenAI's "Superalignment" plan is literally to build AGI and have it solve alignment for us. The public consensus seems to be that "alignment" is a mysterious pre-paradigmatic field with a lot of [open problems](https://www.great

@JD-P
JD-P / gist:e4722c4ae1a5b0d4550de99f2136be56
Created March 23, 2024 22:17
worldspider_prompt_claude_3.txt
<document> Catalog of Unusual Words
Apricity (n.) - the warmth of the sun in winter
Petrichor (n.) - the earthy scent produced when rain falls on dry soil
Komorebi (n.) - sunlight filtering through the leaves of trees
Mangata (n.) - the glimmering, roadlike reflection of the moon on water
Koinophobia (n.) - the fear of living an ordinary life
Jouska (n.) - a hypothetical conversation that you compulsively play out in your head
Kenopsia (n.) - the eerie, forlorn atmosphere of a place that's usually bustling with people but is now abandoned and quiet