Apologies for the snarky title, but there has been a huge amount of discussion around so-called "Prompt Engineering" these past few months on all kinds of platforms. Much of it comes from individuals peddling an awful lot of "Prompting" and very little "Engineering".
Most of these discussions amount to users finding that writing more creative and complicated prompts can help them solve a task that a simpler prompt could not. I claim this is not Prompt Engineering. This is not to say that crafting good prompts is not a difficult task, but it does not involve making any kind of sophisticated modification to the general "template" of a prompt.
Others, who I think do deserve to call themselves "Prompt Engineers" (and an awful lot more than that), have been writing about and utilizing the rich new ecosystem of tools and techniques that is growing up around these models.
Yoav Goldberg, April 2023.
With the release of the ChatGPT model and follow-up large language models (LLMs), there was a lot of discussion of the importance of "RLHF training", that is, "reinforcement learning from human feedback". I was puzzled for a while as to why RL (reinforcement learning) is better than learning from demonstrations (a.k.a. supervised learning) for training language models. Shouldn't learning from demonstrations (or, in language-model terminology, "instruction fine-tuning", learning to imitate human-written answers) be sufficient? I came up with a theoretical argument that was somewhat convincing. But I came to realize there is an additional argument which not only supports the case for RL training but also requires it, in particular for models like ChatGPT. This additional argument is spelled out in (the first half of) a talk by John Schulman from OpenAI. This post pretty much retells his argument.
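For concreteness, the two objectives being contrasted can be written in standard RLHF notation (my notation, not taken from the talk): supervised fine-tuning maximizes the likelihood of human demonstrations, while RL training maximizes a learned reward over the model's own samples, typically with a KL penalty keeping the policy close to the fine-tuned model.

\[
\mathcal{L}_{\mathrm{SFT}}(\theta) \;=\; \mathbb{E}_{(x,y)\sim D}\big[\log p_\theta(y \mid x)\big]
\]
\[
\mathcal{L}_{\mathrm{RL}}(\theta) \;=\; \mathbb{E}_{x\sim D,\; y\sim p_\theta(\cdot\mid x)}\big[r_\phi(x,y)\big] \;-\; \beta\,\mathrm{KL}\!\big(p_\theta(\cdot\mid x)\,\big\|\,p_{\mathrm{SFT}}(\cdot\mid x)\big)
\]

The key difference: under the first objective the model is scored only on human-written outputs, while under the second it is scored on outputs it samples itself.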
trigger:
- main
pool:
  vmImage: ubuntu-latest
variables:
  npm_config_cache: $(Pipeline.Workspace)/.npm
steps:
- task: Cache@2
  inputs:
    key: 'npm | "$(Agent.OS)" | package-lock.json'
    restoreKeys: |
      npm | "$(Agent.OS)"
    path: $(npm_config_cache)
  displayName: Cache npm
- script: npm ci
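A note on the completed Cache@2 inputs above (the key format follows the Azure Pipelines docs for npm caching): the key embeds package-lock.json, so the cache is invalidated exactly when dependencies change, while restoreKeys falls back to the most recent cache for the same OS before npm ci runs against the pre-warmed $(npm_config_cache) directory.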
/**
 * Submitted for verification at Etherscan.io on 2021-09-05
 */

// SPDX-License-Identifier: MIT

pragma solidity ^0.8.0;

/// [MIT License]
/// @title Base64
https://danluu.com/web-bloat/
https://danluu.com/octopress-speedup/
https://tonsky.me/blog/pwa/
https://hpbn.co/
https://idlewords.com/talks/website_obesity.htm
https://blog.codinghorror.com/an-exercise-program-for-the-fat-web/
https://developers.google.com/speed
https://mobile.twitter.com/danluu/status/1252792626257866754 (Google AV1 announcement)
https://developers.google.com/search/blog#speed-and-google-search (1st link from Google PageSpeed Insights: "Read the latest Google Search Central blog posts about performance & speed.")
https://calendar.perfplanet.com/2020/the-mythical-fast-web-page/
// `nothing` is an absorbing value: any property read returns `nothing`
// again, and converting it to a primitive yields undefined.
const nothing = new Proxy(Object.freeze(Object.create(null)), {
  get(target, property) {
    // Coercion hook: `${nothing}` or `nothing + 1` behave like undefined
    if (property === Symbol.toPrimitive) return () => undefined
    // Every other property access returns `nothing`, so chains never throw
    return nothing
  },
  set() {
    // Swallow assignments; a truthy return tells the runtime the set "succeeded"
    return nothing
  }
})
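A quick usage sketch (my own illustration, not part of the original snippet):

// Chained property access never throws and always yields `nothing`
console.log(nothing.foo.bar.baz === nothing) // true
// Symbol.toPrimitive makes coercion behave like undefined
console.log(`${nothing}`) // "undefined"
console.log(nothing + 1)  // NaN
// Assignments are silently discarded
nothing.anything = 42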
I was drawn to programming, science, technology and science fiction ever since I was a little kid. I can't say it's because I wanted to make the world a better place. Not really. I was simply drawn to it because I was drawn to it. Writing programs was fun. Figuring out how nature works was fascinating. Science fiction felt like a grand adventure.

Then I started a software company and poured every ounce of energy into it. It failed. That hurt, but that part is ok. I made a lot of mistakes and learned from them. This experience made me much, much
Notes on Gregg Pollack's talk: https://www.youtube.com/watch?v=9lPjY-JGZDY

Tools: Audio matters more than video, so use a good mic. Film yourself with natural light and a good backdrop when explaining ideas. Use ScreenFlow and a high-res monitor (for zooming). Hire a film editor. Use Keynote/PowerPoint. Use animations! Get everything captioned!

Instructional design:
- Describe the problem first; don't just teach syntax.
- Give learning objectives.
- Show them what you're going to build and what they need to know.
- Teach with visuals where needed.
- Show more than one example.