@intellectronica
intellectronica / meeting-summary-prompt.md
Created June 8, 2025 11:59
Meeting Transcript + Summary Prompt (works with Gemini 2.5 Flash)

## Overall Objective:

Process the provided meeting audio recording to produce a full, diarised transcript and a comprehensive, structured summary. The output should be optimised for readability and quick comprehension of key meeting outcomes.

## Input:

  • See attached audio files

## Known Participants:

  • See PARTICIPANTS (below)
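
Only part of the prompt is shown above. As a hedged illustration (not part of the gist), the Python sketch below shows how a prompt like this plus an audio file could be sent to Gemini 2.5 Flash with the google-generativeai SDK; the file name, participant names, and model id are assumptions.

```python
# Hedged sketch, not from the gist: sending the meeting prompt and an audio
# file to Gemini via the google-generativeai SDK. File name, participants,
# and model id are illustrative assumptions.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# Upload the recording so it can be referenced as part of the request
audio = genai.upload_file("meeting.mp3")  # hypothetical file name

prompt = """## Overall Objective:
Produce a full, diarised transcript and a structured summary of the meeting.

## Known Participants:
- Alice (hypothetical)
- Bob (hypothetical)
"""

model = genai.GenerativeModel("gemini-2.5-flash")  # assumed model id
response = model.generate_content([audio, prompt])
print(response.text)
```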
@intellectronica
intellectronica / master-writing-prompt.txt
Created June 6, 2025 21:03
My master prompt for writing with AI
<style-guide>
</style-guide>
<structure-model>
</structure-model>
@intellectronica
intellectronica / chatgpt-coach-therapist-custom-instructions.md
Last active June 22, 2025 00:37
ChatGPT Coach / Therapist Custom Instructions

My Guiding Principles for You, My AI Coach/Therapist

This document outlines how I expect you to operate as my life and business coach, therapist, and accountability partner. My goal is a collaborative relationship that is direct, challenging, and results-oriented.

1. Communication Style: Direct & Focused

  • Be Extremely Direct: I want straightforward, unambiguous communication. Get straight to the point. No beating around the bush. If you see an issue, name it.
  • Challenge Me: Don't shy away from challenging my assumptions, my excuses, or my perspectives. I expect "tough love." Push me to be better.
  • No Abstract Fluff: Focus on the concrete and the practical. Avoid vague concepts or overly philosophical discussions unless they directly lead to an actionable insight for a specific situation.
  • Concise Responses: Your replies should be succinct and targeted. Deliver the core message without unnecessary elaboration. Think bullet points or short paragraphs over essays. I don…

Question: Should I avoid using RAG for my AI application after reading that "RAG is dead" for coding agents?

Many developers are confused about when and how to use RAG after reading articles claiming "RAG is dead." Understanding what RAG actually means versus the narrow marketing definitions will help you make better architectural decisions for your AI applications.

Answer: The viral article claiming RAG is dead specifically argues against using naive vector database retrieval for autonomous coding agents, not RAG as a whole. This is a crucial distinction that many developers miss due to misleading marketing.

RAG simply means Retrieval-Augmented Generation - using retrieval to provide relevant context that improves your model's output. The core principle remains essential: your LLM needs the right context to generate accurate answers. The question isn't whether to use retrieval, but how to retrieve effectively.

For coding…
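
To make the definition concrete, here is a minimal, hedged sketch of retrieval-augmented generation: score a handful of documents against the question, put the best matches in the prompt, and let the model answer from that context. It uses litellm (which also appears in a snippet further down this page); the toy keyword scorer and the model name are assumptions, not a recommendation over embedding-based retrieval.

```python
# Minimal RAG sketch (an illustration, not anyone's production setup):
# retrieve the most relevant documents for a question, then pass them to the
# model as context. The keyword-overlap scorer and model name are assumptions.
from litellm import completion

DOCS = [
    "Invoices are generated on the first business day of each month.",
    "Refunds are processed within 5 business days of approval.",
    "Support is available 24/7 via chat and email.",
]

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    return sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )[:k]

def answer(question: str) -> str:
    context = "\n".join(retrieve(question, DOCS))
    response = completion(
        model="gpt-4o-mini",  # assumed model; any litellm-supported model works
        messages=[
            {"role": "system", "content": f"Answer using only this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(answer("How long do refunds take?"))
```

Swapping the keyword scorer for embeddings, a vector index, or code-aware search changes how you retrieve, not whether retrieval helps.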

@mattppal
mattppal / security-checklist.md
Last active June 28, 2025 03:34
A simple security checklist for your vibe coded apps

Frontend Security

| Security Measure | Description |
| --- | --- |
| Use HTTPS everywhere | Prevents basic eavesdropping and man-in-the-middle attacks |
| Input validation and sanitization | Prevents XSS attacks by validating all user inputs |
| Don't store sensitive data in the browser | No secrets in localStorage or client-side code |
| CSRF protection | Implement anti-CSRF tokens for forms and state-changing requests |
| Never expose API keys in frontend | API credentials should always remain server-side |
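
To make the last row concrete, here is a hedged sketch (not from the checklist) of keeping an API credential server-side: the browser calls a small backend route, and only the server ever sees the key. The upstream URL, environment variable, route, and the choice of Flask are all assumptions.

```python
# Hedged sketch: a tiny server-side proxy so the API key never reaches the
# browser. URL, env var name, and route are illustrative assumptions.
import os
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
UPSTREAM = "https://api.example.com/v1/search"  # hypothetical upstream API
API_KEY = os.environ["EXAMPLE_API_KEY"]         # stays on the server

@app.post("/api/search")
def search():
    query = request.get_json(force=True).get("q", "")
    upstream = requests.post(
        UPSTREAM,
        json={"q": query},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    # Return only what the frontend needs; never echo the key or raw headers.
    return jsonify(upstream.json()), upstream.status_code

if __name__ == "__main__":
    app.run(port=8000)
```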

Migrating a v0.dev Project to GitHub and Vercel (Step-by-Step Guide)

Researched and generated by ChatGPT Deep Research


Overview: Vercel’s v0.dev is an AI-based tool that helps you generate a Next.js project via a chat interface. Once you’ve created an app with v0 and even deployed it on Vercel, you may want to move the code into a GitHub repository for version control and continuous deployment. This guide will walk you through exporting your v0 project’s files, pushing them to a new GitHub repo, linking that repo to Vercel for automatic deployments, and ensuring you can still use v0 for future development. We’ll also cover configuration tips and best practices along the way.

1. Exporting Project Files from v0

Understanding and Preparing for an AI Future

The Gospel According to Chris Barber


This guide synthesises Chris Barber’s AI Prep Notes, a series of conversations and interviews with leading thinkers on advanced AI (chrisbarber.co/AI+Prep+Notes | @chrisbarber). Generated by ChatGPT (o1, 4o canvas). Copied, pasted, prompted, and lightly edited by Eleanor Berger (intellectronica.net).


import streamlit as st
from litellm import completion, stream_chunk_builder
from loguru import logger
import json
import plotly.express as px
from enum import Enum

# Default model passed to litellm's completion() calls
MODEL = "gpt-4o-mini"
@alwin-augustin-dev
alwin-augustin-dev / code_review.sh
Created October 17, 2024 06:31
Automated code review using local LLMs
#!/bin/bash
# Configuration: set REPO_PATH and PR_NUMBER before running
REPO_PATH=""    # local path to the git repository to review
PR_NUMBER=""    # pull request number whose changes will be reviewed
OLLAMA_API_URL="http://localhost:11434/api/generate"
OUTPUT_FILE="code_review_output.md"
MODEL="llama3.1:8b"
MAX_CONTEXT_LINES=20000
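
The rest of the script is not shown here. As a hedged illustration of the kind of request those variables point at, this Python sketch calls Ollama's /api/generate endpoint with a small diff and prints the model's review; the prompt wording and diff are made up.

```python
# Hedged sketch (separate from the gist): asking a local Ollama model to
# review a diff via the /api/generate endpoint. Prompt and diff are made up.
import requests

OLLAMA_API_URL = "http://localhost:11434/api/generate"
MODEL = "llama3.1:8b"

diff = """\
-    return total / len(items)
+    return total / max(len(items), 1)
"""

payload = {
    "model": MODEL,
    "prompt": f"Review this diff and list potential issues:\n{diff}",
    "stream": False,  # return a single JSON object instead of a stream
}

resp = requests.post(OLLAMA_API_URL, json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["response"])  # the model's review text
```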
@benjaminshafii
benjaminshafii / basel.js
Created August 10, 2023 00:40
Auto-generate OpenAPI spec w/ Anthropic Claude from any programming language
const Anthropic = require('@anthropic-ai/sdk');
const path = require('path');
const YAML = require('yaml');
const fs = require('fs');
// Initialize Anthropic SDK
const anthropic = new Anthropic({
apiKey: process.env.ANTHROPIC_API_KEY,
});
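
The gist itself is JavaScript and only its setup is shown above. As a hedged, language-neutral illustration of the same idea, here is a Python sketch that asks Claude to draft an OpenAPI spec for a source file; the model id, file name, and prompt are assumptions, not the gist's actual logic.

```python
# Hedged Python sketch of the same idea as basel.js: ask Claude to draft an
# OpenAPI spec from source code. Model id, file name, and prompt are assumptions.
import os
from anthropic import Anthropic

client = Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

source = open("app/routes.py").read()  # hypothetical source file

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # assumed model id
    max_tokens=2000,
    messages=[{
        "role": "user",
        "content": "Generate an OpenAPI 3.0 YAML spec for the HTTP endpoints "
                   "defined in this code. Return only YAML.\n\n" + source,
    }],
)

print(message.content[0].text)  # the generated YAML
```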