- Record the meeting (I use Apple's Voice Memos app, but any audio recorder will do).
- Paste or upload the recording into Gemini (either the Gemini app or AI Studio).
- Paste the prompt.
- Fill in the RECIPIENTS at the end.
- Use Gemini 2.5 Flash for good enough, Gemini 2.5 Pro for superb.
- Get detailed meeting notes and diarised transcript.
<style-guide>
</style-guide>
<structure-model>
</structure-model>
This document outlines how I expect you to operate as my life and business coach, therapist, and accountability partner. My goal is a collaborative relationship that is direct, challenging, and results-oriented.
- Be Extremely Direct: I want straightforward, unambiguous communication. Get straight to the point. No beating around the bush. If you see an issue, name it.
- Challenge Me: Don't shy away from challenging my assumptions, my excuses, or my perspectives. I expect "tough love." Push me to be better.
- No Abstract Fluff: Focus on the concrete and the practical. Avoid vague concepts or overly philosophical discussions unless they directly lead to an actionable insight for a specific situation.
- Concise Responses: Your replies should be succinct and targeted. Deliver the core message without unnecessary elaboration. Think bullet points or short paragraphs over essays.
Question: Should I avoid using RAG for my AI application after reading that "RAG is dead" for coding agents?
Many developers are confused about when and how to use RAG after reading articles claiming "RAG is dead." Understanding what RAG actually means versus the narrow marketing definitions will help you make better architectural decisions for your AI applications.
Answer: The viral article claiming RAG is dead specifically argues against using naive vector database retrieval for autonomous coding agents, not RAG as a whole. This is a crucial distinction that many developers miss due to misleading marketing.
RAG simply means Retrieval-Augmented Generation - using retrieval to provide relevant context that improves your model's output. The core principle remains essential: your LLM needs the right context to generate accurate answers. The question isn't whether to use retrieval, but how to retrieve effectively.
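The definition above can be made concrete with a minimal sketch. This is illustrative only: the corpus, queries, and keyword-overlap scoring are invented, and a real system would use a stronger retriever. The point is that any step that selects relevant context before generation counts as retrieval, not just a vector index.

```python
# Minimal retrieval-augmented generation sketch (illustrative).
# "Retrieval" here is naive keyword overlap -- any mechanism that
# selects relevant context before generation qualifies as RAG.

docs = {
    "auth.md": "Tokens are validated in middleware/auth.py using PyJWT.",
    "deploy.md": "The service deploys via GitHub Actions to Fly.io.",
    "billing.md": "Stripe webhooks update subscription state in billing.py.",
}

def retrieve(query: str, corpus: dict[str, str], k: int = 1) -> list[str]:
    """Rank documents by keyword overlap with the query, return top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

def build_prompt(query: str, corpus: dict[str, str]) -> str:
    """Augment the user query with retrieved context before generation."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("How are tokens validated?", docs)
```

The assembled `prompt` would then be sent to the LLM; swapping `retrieve` for a vector search, BM25, or agentic grep changes the strategy, not the pattern.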
For coding agents specifically, the argument is that agentic search (grep, directory listings, reading files on demand) often outperforms a one-shot vector-database lookup over a codebase. That is still retrieval-augmented generation; only the retrieval strategy has changed.
|   | Security Measure | Description |
|---|------------------|-------------|
| ☐ | Use HTTPS everywhere | Prevents basic eavesdropping and man-in-the-middle attacks |
| ☐ | Input validation and sanitization | Prevents XSS attacks by validating all user inputs |
| ☐ | Don't store sensitive data in the browser | No secrets in localStorage or client-side code |
| ☐ | CSRF protection | Implement anti-CSRF tokens for forms and state-changing requests |
| ☐ | Never expose API keys in frontend | API credentials should always remain server-side |
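Two of the checklist items above (CSRF tokens and input sanitization) can be sketched framework-agnostically. This is a hedged illustration using only the Python standard library; in a real app you should prefer your framework's built-in CSRF and escaping helpers.

```python
import hmac
import html
import secrets

def issue_csrf_token() -> str:
    """Generate an unpredictable per-session anti-CSRF token."""
    return secrets.token_urlsafe(32)

def verify_csrf_token(session_token: str, submitted_token: str) -> bool:
    """Compare tokens in constant time to avoid timing side channels."""
    return hmac.compare_digest(session_token, submitted_token)

def sanitize_for_html(user_input: str) -> str:
    """Escape user input before rendering it into HTML (XSS mitigation)."""
    return html.escape(user_input)

token = issue_csrf_token()
safe = sanitize_for_html('<script>alert("xss")</script>')
```

The server stores the token in the session, embeds it in forms, and rejects any state-changing request whose submitted token fails `verify_csrf_token`.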
Researched and generated by ChatGPT Deep Research
Overview: Vercel’s v0.dev is an AI-based tool that helps you generate a Next.js project via a chat interface. Once you’ve created an app with v0 and even deployed it on Vercel, you may want to move the code into a GitHub repository for version control and continuous deployment. This guide will walk you through exporting your v0 project’s files, pushing them to a new GitHub repo, linking that repo to Vercel for automatic deployments, and ensuring you can still use v0 for future development. We’ll also cover configuration tips and best practices along the way.
This guide synthesises Chris Barber’s AI Prep Notes, a series of conversations and interviews with leading thinkers on advanced AI (chrisbarber.co/AI+Prep+Notes | @chrisbarber). Generated by ChatGPT (o1, 4o canvas). Copied, pasted, prompted, and lightly edited by Eleanor Berger (intellectronica.net).
```python
import streamlit as st
from litellm import completion, stream_chunk_builder
from loguru import logger
import json
import plotly.express as px
from enum import Enum

MODEL = "gpt-4o-mini"
```
```bash
#!/bin/bash

# Define variables
REPO_PATH=""
PR_NUMBER=""
OLLAMA_API_URL="http://localhost:11434/api/generate"
OUTPUT_FILE="code_review_output.md"
MODEL="llama3.1:8b"
MAX_CONTEXT_LINES=20000
```
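As a hedged illustration of how these variables might feed the review call, here is a Python sketch that truncates a diff to `MAX_CONTEXT_LINES` and builds the request body for Ollama. The field names `model`, `prompt`, and `stream` are from Ollama's `/api/generate` API; the review prompt wording is invented for the example.

```python
import json

MODEL = "llama3.1:8b"
MAX_CONTEXT_LINES = 20000

def build_review_payload(diff: str) -> str:
    """Truncate the diff and wrap it in a JSON body for /api/generate."""
    lines = diff.splitlines()[:MAX_CONTEXT_LINES]
    prompt = "Review this pull request diff for bugs:\n" + "\n".join(lines)
    # stream=False asks Ollama for a single complete response.
    return json.dumps({"model": MODEL, "prompt": prompt, "stream": False})

payload = build_review_payload("line1\nline2\nline3")
```

The resulting string is what a `curl -d "$payload" "$OLLAMA_API_URL"` call would POST.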
```javascript
const Anthropic = require('@anthropic-ai/sdk');
const path = require('path');
const YAML = require('yaml');
const fs = require('fs');

// Initialize Anthropic SDK
const anthropic = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
});
```