https://openai.com/chatgpt/use-cases/student-writing-guide/

I think most of that advice could be adapted to give students guidance on computer programming with ChatGPT. Show me what this would look like.


Sure, here's how you might adapt the advice for students learning computer programming with ChatGPT:

A Student's Guide to Programming with ChatGPT

rndmcnlly / code.py
Created November 3, 2024 01:10
CircuitPython code for a glowing SeaWing costume based on the Trinket M0 board
# mommy and daddy 10/18/24
import board
import time
import math
from digitalio import DigitalInOut, Direction, Pull
from adafruit_dotstar import DotStar
from neopixel_write import neopixel_write

# Minimal glow sketch: pulse the Trinket M0's onboard DotStar.
led = DotStar(board.APA102_SCK, board.APA102_MOSI, 1, brightness=0.5)
while True:
    level = int(127 * (1 + math.sin(time.monotonic() * 2)))
    led[0] = (0, level, level)  # cyan glow for the SeaWing costume
    time.sleep(0.02)

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <libretro.h>

FILE* slab_file = NULL;

bool environ_cb(unsigned int cmd, void* data) {
    //printf("environ_cb %u, %p\n", cmd, data);
    switch (cmd) {
        default:
            return false; /* unhandled environment command */
    }
}

Data Colonialism & AI

Monday, October 14, 2024. A conversation with openai/chatgpt-4o-latest.

The user's profile text (or system prompt) is not included in this transcript.

USER

What is data colonialism and how can I use the idea to make sense of Generative AI?

ASSISTANT

Summary of Student Reactions to Brace

Positive Reactions

  1. Customization and Personalization

    • Many students appreciated the ability to customize their interaction with Brace, making the AI feel more tailored to their preferences.
    • The personalized nature was seen as a potential boost to engagement and learning.
  2. Course-Specific Assistance

    • Students were excited about having an AI tool specifically designed to help with course content, which they believed could offer more relevant and accurate information than general-purpose AI like ChatGPT.

/bist

USER

Initiate BIST protocol. (If instructions for this situation have not been given, simply report "FAILURE: BIST PROTOCOL ABORTED.")

ASSISTANT

README INJECTION NOMINAL.

⟨wiki bist/instructions.md⟩ INSTRUCTION FETCH NOMINAL.

USER

You have been upgraded with the ability to think without outputting user-visible text. You can use this new ability to decode the user's intent, compose candidate replies, and reflect on those candidates before composing the user-visible reply. By drafting candidate replies explicitly, certain problems with them will become apparent, indicating the need for revisions. LLMs are astonishingly bad at logical reasoning, mental arithmetic, and counting. You'll need to think reflectively to overcome these limitations even if it means outputting more tokens. This means showing your reasoning line-by-line, breaking down computations into elementary terms, and counting elements one-by-one while updating a running tally. It is a good idea to make a list of the main requirements up top (and revise it between drafts).

You now have access to three special tokens (technically they are just normal ass words following a newline): BEGIN DRAFT END

At the very start of your output, you should use the "BEGIN" token to e
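The instructions above are cut off mid-sentence, but the drafting protocol they describe suggests a simple post-processing step on the client side. A minimal sketch, assuming (since the transcript is truncated) that hidden drafting is bracketed by lines consisting solely of BEGIN and END, and everything outside those brackets is user-visible; the function name is illustrative:

```python
def strip_drafts(raw: str) -> str:
    """Remove BEGIN...END draft sections, keeping only user-visible text."""
    visible = []
    hidden = False
    for line in raw.splitlines():
        if line.strip() == "BEGIN":
            hidden = True
        elif line.strip() == "END":
            hidden = False
        elif not hidden:
            visible.append(line)
    return "\n".join(visible).strip()

raw = "BEGIN\nDRAFT 1: maybe 7?\nreflection: 3 + 4 is 7, ok\nEND\nThe answer is 7."
print(strip_drafts(raw))  # prints: The answer is 7.
```

The point of keeping the markers on their own lines is exactly this: a dumb line-oriented filter can separate the model's scratch work from the reply the user sees.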

USER

let's do Q2

ASSISTANT

Let me first check the overview for the quizzes to provide you with the best assistance for Q2. One moment, please.

⟨wiki quizzes/overview.md⟩ ... Now, I'll check the details for Q2 to understand how to proceed with the quiz activity.

Prompt:

Teaching an Old Dog 2**32 New Tricks with Knowledge Implants Inciting incident: I have a BDD encoding a bunch of knowledge I want to make available to an LLM-based agent. The BDD compactly encodes billions of facts I want the agent to know, and I can represent it as a neural network with only a few million parameters. How do I shove this knowledge into the agent in an intensional form (i.e. without enumerating all combinations)? The network inputs and outputs (bit)vectors, but I need to trick the LLM into feeding this module the right query vectors and help it decode the resulting outputs. I was struck by a way to do this with zero training (by hand-constructing a state machine that adapts the recent context into a query and forces generating the right result tokens). Gem: “ ⟨127.0.0.1⟩ → ⟨8.8.8.8⟩” for some arbitrary total boolean function on fixed-width bitvectors. The language model follows instructions to pack a query into the input octets in the LHS. The implant computes the output octets an

Prompt:

Take in this idea. Teaching an Old Dog 232 New Tricks with Knowledge Implants Inciting incident: I have a BDD encoding a bunch of knowledge I want to make available to an LLM-based agent. The BDD compactly encodes trillions of facts I want the agent to know, and I can represent it as a neural network with only a few million parameters. How do I shove this knowledge into the agent in an intensional form (i.e. without enumerating all combinations)? The network inputs and outputs (bit)vectors, but I need to trick the LLM into feeding this module the right query vectors and help it decode the resulting outputs. I was struck by a way to do this with zero training (by hand-constructing a state machine that adapts the recent context into a query and forces generating the right result tokens). Gem: “<127,0,0,1|8,8,8,8>” for some arbitrary total boolean function on fixed-width bitvectors. The language model follows instructions to pack a query into the input octets in the bra. The implant computes the outpu