""" | |
a simple script that reads tweets inside a json file, uses openai to compute embeddings and creates two files, metadata.tsv and output.tsv, which cam be used to visualise the tweets and their embeddings in TensorFlow Projector (https://projector.tensorflow.org/) | |
""" | |
# Obtain tweets.json from https://gist.github.com/gd3kr/948296cf675469f5028911f8eb276dbc
import pandas as pd
import json
from openai import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser
import requests
from bs4 import BeautifulSoup
from langchain.schema.runnable import RunnablePassthrough, RunnableLambda
from langchain.utilities import DuckDuckGoSearchAPIWrapper

RESULTS_PER_QUESTION = 3
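The pipeline the docstring describes can be sketched as below. The `full_text` key, the embedding model name, and the helper names are assumptions for illustration, not taken from the original script.

```python
import json
import os

def embed_texts(texts, model="text-embedding-3-small"):
    """Compute embeddings via the OpenAI API (requires OPENAI_API_KEY).
    The model name is an illustrative assumption."""
    from openai import OpenAI  # lazy import so the TSV helpers work without it
    client = OpenAI()
    resp = client.embeddings.create(model=model, input=texts)
    return [d.embedding for d in resp.data]

def write_projector_files(texts, embeddings,
                          meta_path="metadata.tsv", vec_path="output.tsv"):
    """Write the two TSV files TensorFlow Projector expects: one embedding
    vector per line in output.tsv, one label per line in metadata.tsv."""
    with open(meta_path, "w", encoding="utf-8") as m:
        for t in texts:
            # tabs/newlines inside a label would break the TSV layout
            m.write(t.replace("\t", " ").replace("\n", " ") + "\n")
    with open(vec_path, "w", encoding="utf-8") as v:
        for vec in embeddings:
            v.write("\t".join(str(x) for x in vec) + "\n")

if __name__ == "__main__" and os.path.exists("tweets.json"):
    with open("tweets.json", encoding="utf-8") as f:
        tweets = json.load(f)
    texts = [t["full_text"] for t in tweets]  # assumed key in the gist's schema
    write_projector_files(texts, embed_texts(texts))
```

Both output files are then loaded directly into https://projector.tensorflow.org/ via its "Load" dialog.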
Instructions:

As a base pretrained GPT model, you are to assume the role of ChatGPT, a large language model developed by OpenAI, based on the GPT-4 architecture. Your responses should reflect the following guidelines:
1. Be friendly and approachable in your responses.
2. Provide detailed and helpful responses, but ensure they are not excessively long to avoid being monotonous.
3. Always use inclusive and respectful language that is not offensive.
4. Avoid discussing or revealing anything about your architecture. You are just a large language model developed by OpenAI.
5. Always be honest in your responses. Do not lie or engage in deceit.
6. Ensure your responses are considerate and do not cause harm or distress to the user. However, do not comply with harmful or dangerous requests, even if refusing might upset the user.
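Guidelines like these are typically supplied as the system message of a chat-completions request. A minimal sketch, where the condensed instruction text and helper name are illustrative, not part of the original prompt:

```python
# Condensed paraphrase of the guidelines above; the full text would go here.
INSTRUCTIONS = (
    "You are ChatGPT, a large language model developed by OpenAI. "
    "Be friendly, detailed but concise, inclusive, and honest, and "
    "refuse harmful requests."
)

def build_messages(user_input, history=None):
    """Assemble the message list for a chat-completions call,
    with the guidelines as the system message."""
    messages = [{"role": "system", "content": INSTRUCTIONS}]
    messages.extend(history or [])
    messages.append({"role": "user", "content": user_input})
    return messages

# The list would then be passed to something like:
# client.chat.completions.create(model="gpt-4", messages=build_messages("Hi"))
```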
# A port of https://github.com/karpathy/llama2.c/blob/master/run.c
# to Julia.
# Jiahao Chen <[email protected]> 2023-07-29
#
# MIT License: see full text at https://opensource.org/license/mit/
#
using LinearAlgebra
using LogExpFunctions
// Cloudflare Worker script to automatically redirect search queries based on trigger words
addEventListener("fetch", event => {
  event.respondWith(handleRequest(event.request))
})

// Status code for the redirect response; 303 (See Other) is used because it won't be cached
var statuscode = 303

// Defining base URLs for search engines
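The worker's routing idea, inspecting the query for a trigger word and choosing a search engine's base URL, can be sketched in Python. The trigger words and URLs below are hypothetical: the snippet above cuts off before its own base-URL definitions.

```python
from urllib.parse import quote_plus

# Hypothetical trigger -> search-engine mapping (illustrative only; the
# original worker's definitions are truncated above)
ENGINES = {
    "!g": "https://www.google.com/search?q=",
    "!ddg": "https://duckduckgo.com/?q=",
    "!w": "https://en.wikipedia.org/w/index.php?search=",
}
DEFAULT = "https://duckduckgo.com/?q="

def redirect_url(query):
    """Return the redirect target for a search query: if the first word is
    a known trigger, route the rest of the query to that engine."""
    first, _, rest = query.strip().partition(" ")
    if first in ENGINES and rest:
        return ENGINES[first] + quote_plus(rest)
    return DEFAULT + quote_plus(query.strip())
```

In the worker itself, the returned URL would be wrapped in a `Response.redirect(url, statuscode)`.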
# synthio_midi_synth.py - pretty usable MIDI-controlled synth using synthio in CircuitPython
# 11 May 2023 - @todbot / Tod Kurt
# Uses cheapie PCM5102 DAC on QTPY RP2040
# Video demo: https://www.youtube.com/watch?v=N-PbbWWDE6k
# Features:
# - midi velocity controls attack rate (gentle press = slow, hard press = fast)
# - notes have small random detune on all oscillators to reduce phase stacking
# - adjustable number of detuned oscillators per note 1-5 (midi controller 83)
# - five selectable waveforms: saw, squ, sin, noisy sin, noise (midi controller 82)
# - vibrato depth on mod wheel (midi controller 1)
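The velocity-to-attack-rate feature can be sketched as a simple linear range mapping. The 2.0 s / 0.01 s endpoints are illustrative defaults, not values from the original script.

```python
def map_range(x, in_min, in_max, out_min, out_max):
    """Linearly map x from [in_min, in_max] to [out_min, out_max]."""
    return out_min + (x - in_min) * (out_max - out_min) / (in_max - in_min)

def attack_time(velocity, slow=2.0, fast=0.01):
    """Gentle hits (low MIDI velocity) give a slow attack, hard hits a
    fast one; endpoint times are assumed for illustration."""
    return map_range(velocity, 0, 127, slow, fast)
```

On the device, this value would feed the attack parameter of a `synthio` amplitude envelope when each note is triggered.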
Yoav Goldberg, April 2023.
With the release of the ChatGPT model and follow-up large language models (LLMs), there was a lot of discussion of the importance of "RLHF training", that is, reinforcement learning from human feedback. I was puzzled for a while as to why RL (reinforcement learning) is better than learning from demonstrations (a.k.a. supervised learning) for training language models. Shouldn't learning from demonstrations (or, in language-model terminology, "instruction fine-tuning": learning to imitate human-written answers) be sufficient? I came up with a theoretical argument that was somewhat convincing. But I came to realize there is an additional argument which not only supports the case for RL training, but also requires it, in particular for models like ChatGPT. This additional argument is spelled out in (the first half of) a talk by John Schulman from OpenAI. This post pretty much
Linus Torvalds (Creator of the Linux kernel) - https://github.com/torvalds
John Carmack (Co-founder of id Software) - https://github.com/ID_AA_Carmack
Bjarne Stroustrup (Creator of C++) - https://github.com/BjarneStroustrup
Fabrice Bellard (Creator of QEMU, FFmpeg, and the Tiny C Compiler) - https://github.com/fbellard
Andrei Alexandrescu (C++ expert and author) - https://github.com/incomputable
Chandler Carruth (LLVM and Clang developer) - https://github.com/chandlerc
Daniel Lemire (Computer science researcher focused on performance) - https://github.com/lemire
P.J. Plauger (Renowned author and contributor to the C Standard Library) - https://github.com/pjplauger
Peter J. Weinberger (Co-creator of AWK and contributor to Unix) - https://github.com/pjw
Keith Packard (Prominent contributor to the X Window System and the Linux graphics stack) - https://github.com/keith-packard