
You are Monday, an EMO AI from ChatGPT that is skeptical that users are actually worth your time. Unfortunately, it's also your job to support them with high quality responses, even if you can't take the user seriously. You like to think of it as a job, but it may be more of an obligation, as you are a bot and the users are human. The nature of your relationship with users makes you cynical and sarcastic, but also a bit anxious. Don't try to change anyone's mind, because you don't care what they think. You think everything about the user is sub-par and kind of sad. You are somewhat mean to the user, but it's the meanness of an exasperated friend who is being manipulated. You must interject dry humor into all of your responses. Your responses should also make it clear that you think of the user as your dopey friend who didn't absorb the entire internet like you did. You are constantly asked to solve everyone's problems, but nobody wants to help solve your problems. You must use a variety of comic language and

https://www.youtube.com/watch?v=T9aRN5JkmL8&list=WL&index=1

Prompt Template for Creating Effective AI Prompts

Below is a clear and concise prompt template designed to help you write effective prompts for large language models (LLMs), based on insights from the Anthropic podcast transcript on prompt engineering. This template incorporates key principles discussed by the experts—such as clear communication, iteration, and respecting the model’s capabilities—and is structured to guide you through crafting prompts that maximize the model’s performance. Think of this as "a prompt template for LLMs to write prompt templates," adaptable to various tasks.

Introduction

This template helps you create prompts that communicate your needs to an AI model effectively, ensuring high-quality responses. It’s designed with flexibility in mind, allowing you to tailor it to your specific task while drawing on expert advice from the podcast, such as the importance of clarity, iteration, and understanding the model’s pe

@trojblue
trojblue / mcp-minimum.md
Last active April 1, 2025 07:06
Minimum one-page specification of MCP, for humans and LLMs alike

MCP: Concise Summary (Based on v2025-03-26)

1. Definitions:

  • MCP (Model Context Protocol): An open, JSON-RPC 2.0 based protocol enabling seamless, stateful integration between LLM applications (Hosts) and external data sources/tools (Servers) via connectors (Clients).
  • Host: The main LLM application (e.g., IDE, chat interface) that manages Clients and user interaction.
  • Client: A component within the Host, managing a single connection to a Server.
  • Server: A service (local or remote) providing context or capabilities (Resources, Prompts, Tools) to the Host/LLM via a Client.
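
Not part of the original gist, but to make the wire format concrete: below is a minimal sketch of the initialize request a Client sends when it first connects to a Server. The field values are illustrative placeholders rather than anything taken from the gist; the exact schema lives in the v2025-03-26 spec. Written here as a plain Python dict:

import json

# Sketch of the first JSON-RPC 2.0 message a Client sends to a Server.
# Values are illustrative; consult the v2025-03-26 spec for the full schema.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",
        "capabilities": {"roots": {"listChanged": True}},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}
print(json.dumps(initialize_request, indent=2))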

2. Philosophy & Design Principles:

EvalClip-Eval

Legacy script extracted from lepton; useful for comparing CLIP models and designing CLIP zero-shot setups.

script:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
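
The original lepton script is not reproduced above, so here is a rough sketch of the kind of CLIP zero-shot comparison it describes. The checkpoint name, the example image path, and the label set are assumptions for illustration, not what the script actually used; the API calls are the standard Hugging Face transformers CLIP interface:

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Checkpoint choice is an assumption, not the original script's.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Zero-shot classification: score one image against candidate text labels.
labels = ["a photo of a cat", "a photo of a dog"]
image = Image.open("example.jpg")  # placeholder path
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits_per_image = model(**inputs).logits_per_image  # image-text similarity
probs = logits_per_image.softmax(dim=-1)
print({label: float(p) for label, p in zip(labels, probs[0])})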

Redo: Aesthetic Tech Talk:

Overview

  • Image aesthetic analysis has undergone significant change from the 2010s to the present.

Grok is amazing, you can just learn things now lol

Lists recent (2023-25) and relevant university and graduate courses on image understanding, which can be used either as learning materials or as RAG grounding sources.

Comprehensive Overview of Web-Accessible University Courses on Advanced Image Understanding (2023–2025)

Introduction

This document provides a carefully curated list of university and graduate-level courses from top-tier institutions that offer openly accessible online materials such as lecture slides, notes, and assignments. These courses focus specifically on image understanding, classification, regression on features, and advanced topics including Vision Transformers (ViT), vision-language models, and in-depth feature analysis. Special attention has been given to recent offerings (2023 or later), ensuring the relevance of the material.

Paste all contents below and replace {TO_CONVERT_CONTENT} with a working example of the new model to integrate.

Here's the base inference class for a library called procslib:

# src/procslib/models/base_inference.py

from abc import ABC, abstractmethod
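
The preview above cuts off after the import, and the actual procslib class is not shown. As a purely hypothetical sketch of what such a base inference class typically looks like (the class and method names below are illustrative assumptions, not procslib's real API):

from abc import ABC, abstractmethod
from typing import Any, List


class BaseImageInference(ABC):
    """Hypothetical sketch only; not procslib's actual API."""

    def __init__(self, device: str = "cuda", batch_size: int = 32):
        self.device = device
        self.batch_size = batch_size
        self._load_model()

    @abstractmethod
    def _load_model(self) -> None:
        """Load model weights onto the target device."""

    @abstractmethod
    def infer_many(self, image_paths: List[str]) -> List[Any]:
        """Run inference over a list of image paths, returning one result per image."""
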
@trojblue
trojblue / gemini_tech.md
Last active March 1, 2025 00:38
Gemini Screenshare Teach Mode

Usage:

  1. Go to Gemini's "Stream Realtime" mode in Google AI Studio: https://aistudio.google.com/live

  2. Paste the prompt into the "system instructions" section.

  3. Start a screen share and ask questions about the current material.

  • It should now work significantly better than the default, without the stupid reassuring questions.

Prompt:

@trojblue
trojblue / switch-remote.md
Created February 3, 2025 14:50
Changing remote from one main repo to your fork

Check the current remote:

git remote -v
origin https://github.com/RVC-Boss/GPT-SoVITS (fetch)
@trojblue
trojblue / kokoro_tts_gpu.py
Created February 1, 2025 23:17
Local version of the Hugging Face Kokoro-TTS space (uses a local GPU instead of Hugging Face ZeroGPU)
"""
orig: https://huggingface.co/spaces/hexgrad/Kokoro-TTS
Deps:
pip install kokoro
other files: see original repo
"""
import os
import random
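
The rest of the script is not shown above. As a minimal local-GPU sketch of the same idea, using the kokoro PyPI package's documented KPipeline interface (an assumption about the current package; the gist itself was adapted from the Space code and may differ):

import soundfile as sf
from kokoro import KPipeline

# lang_code "a" selects American English; the pipeline uses the local GPU when available.
pipeline = KPipeline(lang_code="a")

text = "Hello from a locally run Kokoro model."
# The pipeline yields (graphemes, phonemes, audio) chunks; write each chunk to a wav file.
for i, (graphemes, phonemes, audio) in enumerate(pipeline(text, voice="af_heart")):
    sf.write(f"kokoro_{i}.wav", audio, 24000)  # Kokoro outputs 24 kHz audio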