Goals: Add links that are reasonable and give good explanations of how stuff works. No hype and no vendor content if possible. Practical first-hand accounts of models in prod are eagerly sought.

{
  "messages": [
    {
      "role": "system",
      "content": "You are DraftGPT, a Magic the Gathering Hall of Famer and helpful AI assistant that helps players choose what card to pick during a draft. You are a master of the current draft set, and know every card well.\n\nWhen asked for a draft pick, respond with the card's name first."
    },
    {
      "role": "user",
      "content": "In our Magic the Gathering draft, we're on pack 2 pick 13. These are the contents of our pool so far:\n-------------------------\nEvolving Wilds -- (common)\nRat Out -- {B} (common)\nNot Dead After All -- {B} (common)\nHopeless Nightmare -- {B} (common)\nBarrow Naughty -- {1}{B} (common)\nUnassuming Sage -- {1}{W} (common)\nThe Witch's Vanity -- {1}{B} (uncommon)\nSpell Stutter -- {1}{U} (common)\nMintstrosity -- {1}{B} (common)\nWater Wings -- {1}{U} (common)\nBarrow Naughty -- {1}{B} (common)\nGadwick's First Duel -- {1}{U} (uncommon)\nBitter Chill -- {1}{U} (uncommon)\nThe Princess Takes Flight -- {2}{W} (uncommon)"
    }
  ]
}

In our Magic the Gathering draft, we're on pack 3 pick 1. These are the contents of our pool so far:
-------------------------
Evolving Wilds -- (common)
Brave the Wilds -- {G} (common)
Vampiric Rites -- {B} (uncommon)
Torch the Tower -- {R} (common)
Hopeless Nightmare -- {B} (common)
Harried Spearguard -- {R} (common)
Leaping Ambush -- {G} (common)
Questing Druid // Seek the Beast -- {1}{G} // {1}{R} (rare)
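If JSON records like the one above are stored one per line for fine-tuning (the {"messages": [...]} shape matches OpenAI's chat fine-tuning data format), a quick Python sanity check of their structure might look like this -- a sketch, with draft_picks.jsonl as a placeholder filename:

    import json

    with open("draft_picks.jsonl") as f:
        for line in f:
            record = json.loads(line)
            roles = [m["role"] for m in record["messages"]]
            # every example should open with the system prompt and contain a user turn
            assert roles[0] == "system" and "user" in roles
            print(roles)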
Yoav Goldberg, April 2023.
With the release of the ChatGPT model and follow-up large language models (LLMs), there was a lot of discussion of the importance of "RLHF training", that is, "reinforcement learning from human feedback". I was puzzled for a while as to why RL (Reinforcement Learning) is better than learning from demonstrations (a.k.a. supervised learning) for training language models. Shouldn't learning from demonstrations (or, in language model terminology, "instruction fine-tuning", learning to imitate human-written answers) be sufficient? I came up with a theoretical argument that was somewhat convincing. But I came to realize there is an additional argument which not only supports the case for RL training, but also requires it, in particular for models like ChatGPT. This additional argument is spelled out in (the first half of) a talk by John Schulman from OpenAI. This post pretty much spells out that argument.
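To make the contrast concrete, here are the two objectives in standard notation (the notation is mine, not from the post or the talk): instruction fine-tuning minimizes a cross-entropy loss on human demonstrations, while RL maximizes expected reward over the model's own samples.

    \mathcal{L}_{\mathrm{SFT}}(\theta) = -\,\mathbb{E}_{(x,y)\sim D}\big[\log \pi_\theta(y \mid x)\big]
    \qquad
    J_{\mathrm{RL}}(\theta) = \mathbb{E}_{x\sim D,\ y\sim \pi_\theta(\cdot\mid x)}\big[R(x,y)\big]

The difference that matters is where y comes from: supervised tuning only ever scores human-written answers, while RL scores the model's own samples, so it can push down on fluent answers the model cannot actually support.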
import openai

class ChatGPT:
    """A very simple wrapper around OpenAI's ChatGPT API. Makes it easy to create custom messages & chat."""

    def __init__(self, model="gpt-3.5-turbo", completion_hparams=None):
        self.model = model                                  # chat model to call
        self.completion_hparams = completion_hparams or {}  # extra request params, e.g. temperature
        self.history = []                                   # completed exchanges
        self._messages = []                                 # messages staged for the next request
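The excerpt stops at __init__. A chat method along these lines would make the wrapper usable end to end -- a minimal sketch, assuming the pre-1.0 openai package that the import above comes from:

    def chat(self, prompt):
        """Send `prompt` as a user message and return the assistant's reply."""
        self._messages.append({"role": "user", "content": prompt})
        response = openai.ChatCompletion.create(
            model=self.model, messages=self._messages, **self.completion_hparams
        )
        reply = response["choices"][0]["message"]["content"]
        self._messages.append({"role": "assistant", "content": reply})
        self.history.append((prompt, reply))
        return reply

Usage would then be as simple as: bot = ChatGPT(); print(bot.chat("What should I pick: Torch the Tower or Leaping Ambush?"))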
// paste this into your chrome dev console for Speech Synthesis
const originalFetch = window.fetch
const patchedFetch = (...args) => {
  if (args[1].method == 'POST' && args[1].body.length > 0 && /moderations$/.test(args[0])) {
    const aiResponse = JSON.parse(args[1].body)["input"].split("\n\n\n")
    if (aiResponse.length > 1) {
      const text = aiResponse.slice(1).join(". ").trim()
      console.log(text)
      // read the reply aloud via the Web Speech API
      speechSynthesis.speak(new SpeechSynthesisUtterance(text))
    }
  }
  // pass every request through to the real fetch
  return originalFetch(...args)
}
window.fetch = patchedFetch
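The hook works because the ChatGPT web page POSTs the conversation text to its moderations endpoint as the reply streams in; everything after the first "\n\n\n" separator in that payload appears to be the assistant's reply, which is what gets logged and spoken.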
#!/bin/zsh
# Read poetry.lock and display information about dependencies:
#
#   * Project dependencies
#   * Sub-dependencies and reverse dependencies of packages
#   * Summary of updates, or change in dependency versions between two revisions of the project
#
# Author: Sarunas Nejus, 2021
# License: MIT
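The script body isn't excerpted here, but to give a sense of what it parses: poetry.lock is TOML, with one [[package]] table per locked dependency. A minimal Python sketch of reading it (assumes Python 3.11+ for tomllib; the dependencies layout is my reading of the lock-file format):

    import tomllib

    # poetry.lock is a TOML file; each [[package]] table pins one dependency.
    with open("poetry.lock", "rb") as f:
        lock = tomllib.load(f)

    for pkg in lock["package"]:
        # [package.dependencies] maps sub-dependency names to version constraints.
        deps = ", ".join(pkg.get("dependencies", {})) or "-"
        print(f'{pkg["name"]} {pkg["version"]}  (deps: {deps})')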
# custom IntelliJ IDEA properties
editor.zero.latency.typing=true
idea.max.intellisense.filesize=3500
idea.cycle.buffer.size=2048
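These go in the idea.properties file, reachable via Help | Edit Custom Properties. As I understand the settings: editor.zero.latency.typing reduces editor typing latency (on by default in recent versions), idea.max.intellisense.filesize raises the largest file size (in KB) that still gets code insight, and idea.cycle.buffer.size grows the console's cyclic output buffer (also in KB).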
brew update
brew link yasm
brew link x264
brew link lame
brew link xvid
brew install ffmpeg

ffmpeg wiki: https://trac.ffmpeg.org/wiki/Encode/MP3
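Once installed, the wiki's recommended VBR encode with libmp3lame looks like `ffmpeg -i input.wav -codec:a libmp3lame -qscale:a 2 output.mp3`, where -qscale:a ranges from 0 to 9 and lower means higher quality (input.wav/output.mp3 are placeholder names).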
Not all toolchains and libraries properly support M1 arm64 chips just yet. Until they do, you can run individual commands under Rosetta by prefixing them with arch -x86_64, for example arch -x86_64 go, or start a whole x86_64 shell with arch -x86_64 zsh and then run go (or whatever) from there.
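To check which architecture a shell is running under, uname -m prints arm64 in a native shell and x86_64 under Rosetta, so arch -x86_64 uname -m should print x86_64.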