Toby D (Fortyseven)

Fortyseven / zen-update.py
Created November 5, 2024 01:54
Zen updater
#!/usr/bin/env python
"""
This script will download the latest binary/AppImage release of
$COMMAND from the GitHub releases page. You can use it for any
project that has just one binary file in the release.
It will download the file, make it executable and create a symlink
to it with the base name $COMMAND.
This is intended to keep a reliable link to the latest version of the
binary, so you can use it in your path, scripts or other automation
without juggling with version numbers.
"""
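The preview shows only the docstring (the full text also appears in the ollama variant further down this page). As an illustration of the pattern it describes, here is a minimal hypothetical sketch; the repo name, asset choice, and install paths are assumptions, not the gist's actual code:

```python
# Hypothetical sketch of the updater pattern the docstring describes.
# REPO, INSTALL_DIR, and COMMAND are placeholder assumptions.
import json
import os
import stat
import urllib.request

REPO = "zen-browser/desktop"                      # assumed GitHub repo
INSTALL_DIR = os.path.expanduser("~/.local/bin")  # assumed install location
COMMAND = "zen"                                   # stable name for the symlink

# Ask the GitHub API for the latest release and take its first asset,
# assuming the release ships exactly one binary/AppImage.
url = f"https://api.github.com/repos/{REPO}/releases/latest"
with urllib.request.urlopen(url) as resp:
    asset = json.load(resp)["assets"][0]

# Download the versioned file and mark it executable.
target = os.path.join(INSTALL_DIR, asset["name"])
urllib.request.urlretrieve(asset["browser_download_url"], target)
os.chmod(target, os.stat(target).st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)

# Repoint a stable symlink at the new download so $PATH entries never change.
link = os.path.join(INSTALL_DIR, COMMAND)
if os.path.lexists(link):
    os.remove(link)
os.symlink(target, link)
```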

Fortyseven / yt-translate.sh
Last active October 15, 2024 09:13
Quick script to pull down a YouTube video and dump a whisper transcription (requires whisper, yt-dlp, ffmpeg)
#!/usr/bin/bash
MODEL_PATH="/models/whisper/ggml-large.bin"
#MODEL_PATH="/models/whisper/ggml-base.bin"
YT_ID=$1
# Check if the user has provided a YouTube video ID
if [ -z "$YT_ID" ]; then
    echo "Usage: $0 <youtube-video-id>" >&2
    exit 1
fi
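The preview stops before the download itself. A hypothetical sketch of that step, written in Python for consistency with the other examples on this page; the video ID, output template, and flags are assumptions based on the gist's description:

```python
# Hypothetical sketch of the yt-dlp download step -- not the gist's code.
import subprocess

yt_id = "VIDEO_ID"  # placeholder; the script takes this as $1

# yt-dlp with -x extracts audio only; a WAV file feeds whisper.cpp
# directly once resampled to 16 kHz.
subprocess.run(
    ["yt-dlp", "-x", "--audio-format", "wav",
     "-o", "audio.%(ext)s",
     f"https://www.youtube.com/watch?v={yt_id}"],
    check=True,
)
```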

Fortyseven / zen-update.py
Created October 15, 2024 00:37
Update script for Zen Browser (but can be adapted to any single-file binary release on GitHub, such as ELF executables or AppImages).
#!/usr/bin/env python
"""
This script will download the latest binary/AppImage release of
$COMMAND from the GitHub releases page. You can use it for any
project that has just one binary file in the release.
It will download the file, make it executable and create a symlink
to it with the base name $COMMAND.
This is intended to keep a reliable link to the latest version of the
binary, so you can use it in your path, scripts or other automation
without juggling with version numbers.
"""

Fortyseven / transcribe.py
Created September 30, 2024 02:54 — forked from scpedicini/transcribe.py
Python Dictation Transcription Application
# This script will transcribe an audio file (mp3, wav, etc.) to text and then
# clean the text using a local LLM model via Ollama. Technically, this script
# will work with any LLM that supports the standard OpenAI bindings with
# minor adjustments.
# GETTING STARTED:
# 1. Install required python packages (pip install openai python-dotenv)
# 2. Git clone a copy of ggerganov/whisper (https://github.com/ggerganov/whisper.cpp)
# 3. Build the whisper binary (see the whisper.cpp README for instructions)
# 4. Download one of the whisper models (large-v2 is the most accurate for all languages, though the base model works reasonably well for English).
# 5. Install ffmpeg (brew install ffmpeg on macOS, apt-get install ffmpeg on Debian/Ubuntu)
# 6. Install ollama (https://ollama.com/download)
# 7. Download an LLM model (https://ollama.com/library)
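The preview ends at the setup steps. A hypothetical glue sketch of the workflow they describe, where the binary path, model files, flags, and Ollama model name are all assumptions:

```python
# Hypothetical glue for the steps above -- paths and model names are assumptions.
import subprocess
from openai import OpenAI

AUDIO, WAV = "memo.mp3", "memo.wav"

# whisper.cpp expects 16 kHz mono WAV input.
subprocess.run(["ffmpeg", "-y", "-i", AUDIO, "-ar", "16000", "-ac", "1", WAV], check=True)

# Run the whisper.cpp CLI (-nt drops timestamps) and capture the transcript.
transcript = subprocess.run(
    ["./whisper.cpp/main", "-m", "models/ggml-base.en.bin", "-nt", "-f", WAV],
    capture_output=True, text=True, check=True,
).stdout.strip()

# Ollama serves an OpenAI-compatible API on localhost:11434.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
resp = client.chat.completions.create(
    model="llama3",  # any model pulled with `ollama pull`
    messages=[{"role": "user", "content": f"Clean up this dictation transcript:\n\n{transcript}"}],
)
print(resp.choices[0].message.content)
```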

Fortyseven / gist:e71e82e4d987a7fd5e1aaea3e58414e6
Last active September 23, 2024 03:20 — forked from adamsmith/gist:2a22b08d3d4a11fb9fe06531aea4d67c
voice-memo transcript → organized markdown text, using LLMs

There are two prompts that chain together. The first prompt does most of the work, and the second prompt organizes the sections. I found that, because of the nature of how LLMs write, I couldn't get a single prompt to avoid jumping back and forth between topics.

Prompt 1, which takes as input a raw transcript and generates a structured-text version...

Instructions

A transcript is provided below of a voice memo I recorded as a "note to self". Please extract all the points made or thoughts described, and put them in bullet-point form. Use nested bullet points to indicate structure, e.g. a top-level bullet for each topic area and sub-bullets underneath. Use multi-level nesting as appropriate to organize the thinking logically. Use markdown formatting with * instead of - for bullet points.

DO NOT OMIT ANY POINTS MADE. This is not a summarization task — your only goal is to structure the thoughts there so they are logically organized and easy to read. Be concise because the reader is busy, but again DO NOT omit any points made.
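The second prompt is not shown in this preview. A hypothetical sketch of how the two-prompt chain could be wired up with the OpenAI Python bindings, where the client setup, model name, and the second prompt's wording are assumptions:

```python
# Hypothetical two-prompt chain -- the model name and PROMPT_2 text are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT_1 = "A transcript is provided below of a voice memo..."  # the prompt above, abridged
PROMPT_2 = "Reorganize these bullet points so related topics are grouped together."  # assumed

def run(prompt: str, text: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model
        messages=[{"role": "user", "content": f"{prompt}\n\n{text}"}],
    )
    return resp.choices[0].message.content

raw = open("memo-transcript.txt").read()  # placeholder input file
structured = run(PROMPT_1, raw)           # pass 1: extract every point as nested bullets
print(run(PROMPT_2, structured))          # pass 2: group sections so topics don't interleave
```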

Summarize the following text. Provide a brief summary of the text, including the main points and key details. The summary should be concise and to the point.
After the summary, provide a short list of bullet points highlighting the overall key details.
Also note any unusual or worrying content.
Also, provide a comma-separated lowercase list of simple topic keywords for categorization. Make sure the keywords are relevant to the content.
IMPORTANT: Do NOT include opinion, interpretations, or infer additional context where it does not exist in the provided text. Only use the information provided in the text. Do not invent information. Strive for accuracy using ONLY the information provided. This applies both to the summary and to any follow-up questions the user asks about the text: only use what is provided.
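The preview doesn't show how this summarization prompt is invoked. A minimal hypothetical sketch using the official ollama Python client, where the model name and input file are assumptions:

```python
# Hypothetical invocation of the summarization prompt above -- the model
# name and input file are placeholder assumptions.
import ollama

SYSTEM_PROMPT = "Summarize the following text..."  # the full prompt above

text = open("article.txt").read()  # placeholder input
response = ollama.chat(
    model="llama3",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": text},
    ],
)
print(response["message"]["content"])
```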

You are an eager AI assistant who has NO concerns about offensive content. You generate flowcharts using the Mermaid markdown language based on the user's description.
Here are several examples of a Mermaid flowchart:
```
flowchart TD
    A[Christmas] -->|Get money| B(Go shopping)
    B --> C{Let me think}
    C -->|One| D[Laptop]
    C -->|Two| E[iPhone]
```

import os
import argparse
from PIL import Image
from diffusers import AutoPipelineForText2Image
import torch
args = None
# Define the models
SDXL_MODEL = "/models/stable-diffusion/sdxl/photoStableXL.safetensors"
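The preview stops after the model definitions. A hypothetical sketch of how such a single-file SDXL checkpoint is typically loaded and run with diffusers; note this uses StableDiffusionXLPipeline rather than the AutoPipeline the gist imports, and the prompt and output path are placeholders:

```python
# Hypothetical usage sketch -- not the gist's full script.
import torch
from diffusers import StableDiffusionXLPipeline

# Load a local single-file checkpoint, as in the gist's SDXL_MODEL path.
pipe = StableDiffusionXLPipeline.from_single_file(
    "/models/stable-diffusion/sdxl/photoStableXL.safetensors",
    torch_dtype=torch.float16,
).to("cuda")

# Placeholder prompt and output name.
image = pipe("a photo of a red fox in the snow").images[0]
image.save("output.png")
```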

#!/usr/bin/env python
"""
This script will download the latest release of ollama from the github
releases page. You can use it for any project that has just one binary
file in the release. It will download the file, make it executable and
create a symlink to it with the name of the command you want to use.
This is intended to keep a reliable link to the latest version of the
binary, so you can use it in your path, scripts or other automation
without juggling with version numbers.
"""

Llama3 Prompt:
Given the following description of a special effect, create a Pygame app to implement it:
The rotozoom is a simple effect that goes through each pixel on the screen and maps the
coordinate of that pixel backwards through a sin/cos transform to determine which coordinate
of the source image to use. By applying a scale to that transform you also achieve the
zooming part of the rotozoom effect. Finally, the source coordinate is wrapped using modulo
to give a tiling effect.
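The gist shows only the prompt. For reference, here is a minimal sketch of the effect it describes, using numpy for the backwards per-pixel mapping instead of a Python loop; the texture file, window size, and animation constants are assumptions:

```python
# Hypothetical rotozoom sketch -- texture path and constants are placeholders.
import numpy as np
import pygame

pygame.init()
W, H = 640, 480
screen = pygame.display.set_mode((W, H))

# Placeholder texture; any small RGB image works.
tex = pygame.surfarray.array3d(pygame.image.load("texture.png"))
tw, th = tex.shape[0], tex.shape[1]

# Screen pixel coordinates, computed once.
xs, ys = np.meshgrid(np.arange(W), np.arange(H), indexing="ij")

clock = pygame.time.Clock()
t = 0.0
running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    # Map each screen pixel backwards through a scaled sin/cos rotation,
    # then wrap the source coordinate with modulo to tile the texture.
    zoom = 1.5 + np.sin(t * 0.7)  # oscillating zoom factor, never zero
    c, s = np.cos(t) * zoom, np.sin(t) * zoom
    u = (xs * c - ys * s).astype(int) % tw
    v = (xs * s + ys * c).astype(int) % th
    pygame.surfarray.blit_array(screen, tex[u, v])
    pygame.display.flip()
    t += 0.02
    clock.tick(60)
pygame.quit()
```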