Morgan McGuire morganmcg1

morganmcg1 / system_design
Created August 17, 2025 09:16
System Design - https://www.seangoedecke.com/good-system-design/, software best practices tips
sean goedecke
Everything I know about good system design
I see a lot of bad system design advice. One classic is the LinkedIn-optimized "bet you never heard of queues" style of post, presumably aimed at people who are new to the industry. Another is the Twitter-optimized "you're a terrible engineer if you ever store booleans in a database" clever trick[1]. Even good system design advice can be kind of bad. I love Designing Data-Intensive Applications, but I don't think it's particularly useful for most system design problems engineers will run into.
What is system design? In my view, if software design is how you assemble lines of code, system design is how you assemble services. The primitives of software design are variables, functions, classes, and so on. The primitives of system design are app servers, databases, caches, queues, event buses, proxies, and so on.
This post is my attempt to write down, in broad strokes, everything I know about good system design. A lot of the concrete judgment calls do come
morganmcg1 / custom_costs.py
Created July 17, 2025 20:46
Weave custom costs
# uv pip install weave langchain-google-genai -qq
import os
import weave
from langchain_google_genai import ChatGoogleGenerativeAI

os.environ["GOOGLE_API_KEY"] = "<YOUR_GOOGLE_API_KEY>"  # placeholder; set a real Gemini API key

GEMINI_MODEL_NAME = "gemini-2.5-flash"
CUSTOM_MODEL_NAME = "lc_" + GEMINI_MODEL_NAME

weave_client = weave.init("cost-testing")
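A custom-cost entry ultimately boils down to attaching per-token prices to a model name and multiplying them against token counts. A minimal pure-Python sketch of that arithmetic (the prices below are invented for illustration, not Gemini's actual rates, and `call_cost` is a hypothetical helper, not a Weave API):

```python
# Hypothetical per-million-token prices, keyed by the custom model name above.
# These numbers are made up for the sketch.
PRICES = {
    "lc_gemini-2.5-flash": {"prompt": 0.30, "completion": 2.50},  # $ per 1M tokens
}

def call_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Return the dollar cost of one LLM call from its token counts."""
    p = PRICES[model]
    return (prompt_tokens * p["prompt"] + completion_tokens * p["completion"]) / 1_000_000

# A call that used 1,000 prompt tokens and 200 completion tokens:
cost = call_cost("lc_gemini-2.5-flash", 1000, 200)
```

With these assumed prices the example call costs (1000 × 0.30 + 200 × 2.50) / 1,000,000 = $0.0008.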
morganmcg1 / wandbot_eval_with_intercom_and_refernce_answers.json
Created May 19, 2025 20:50
wandbot_with_intercom_and_reference_answers
[
  {
    "index": 0,
    "question": "Hey I have a question about using wandb with fastapi in a prod environment. is it recommended to initialize wandb within a specific route function, ie\n\n `@app.route('/')\n def my_function():\n wandb.init(...)\n `\n or should i initialize beforehand:\n\n `wandb.init(...)\n @app.route('/')\n def my_function():\n ...`\n\n I'm getting a long list of log items in the console and many of them are empty.",
    "generated_answer": "We recommend initializing wandb as early as possible in your FastAPI application, preferably before defining your route functions. This helps capture any console output, including error messages, which can be useful for debugging.\nTo set up wandb in your FastAPI app:\nInitialize wandb at the start of your main application file:\nimport wandb\nwandb.init(project=\"your_project_name\")\n\napp = FastAPI()\n\[email protected]('/')\ndef my_function():\n # Your route logic here\n\nTo address the empty log items, try inc
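The answer above reduces to a general rule: do process-wide setup once at import time, not inside a route that runs on every request. A dependency-free sketch of why (no wandb or FastAPI here; `init_tracking` and `handle_request` are stand-in names):

```python
# Stand-in demo: module-level setup runs once per process,
# while setup placed inside a route would run on every request.
init_calls = 0

def init_tracking():
    """Stand-in for wandb.init(...) done at import time."""
    global init_calls
    init_calls += 1

init_tracking()  # module level: runs exactly once per process

def handle_request():
    """Stand-in for a route function; no setup inside."""
    return "ok"

# Three "requests" hit the route, but setup never reruns.
responses = [handle_request() for _ in range(3)]
```

Had `init_tracking()` lived inside `handle_request`, the counter would read 3 after the loop, which is the per-request side-effect multiplication the answer warns about.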
morganmcg1 / wandbot_eval_intercom_answers.py
Created May 19, 2025 20:00
wandbot evals including Intercom bot answers - May 19th 2025
[
{
"index": 0,
"question": """Hey I have a question about using wandb with fastapi in a prod environment. is it recommended to initialize wandb within a specific route function, ie
`@app.route('/')
def my_function():
wandb.init(...)
`
or should i initialize beforehand:
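Eval records like the one previewed above are easiest to debug if they are sanity-checked before a run. A small sketch (the field names `index` and `question` come from the snippets above; `validate_records` is a hypothetical helper, not part of wandbot):

```python
import json

REQUIRED_FIELDS = {"index", "question"}

def validate_records(raw: str) -> list[dict]:
    """Parse a JSON eval file and check each record has the required fields."""
    records = json.loads(raw)
    for i, rec in enumerate(records):
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            raise ValueError(f"record {i} missing fields: {missing}")
    return records

records = validate_records('[{"index": 0, "question": "How do I log metrics?"}]')
```

Failing fast on a malformed record is cheaper than discovering it halfway through a scored evaluation.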
morganmcg1 / prompts_cheatsheet.md
Created April 22, 2025 11:22
prompts_cheatsheet from o3 after passing a bunch of jailbroken prompts

Below is a practical "cheat-sheet" distilled from the many production prompts you just saw (Cursor, v0, Devin, Manus, Replit, Lovable, etc.). Follow these guidelines when designing prompts for your own LLM-powered apps and you'll get more reliable, safer and higher-quality results.

────────────────────────────────────────────────────────

1. Nail the Three Core Elements
────────────────────────────────────────────────────────

A good prompt almost always contains ❶ role, ❷ objective, ❸ constraints / format.

• Role
  "You are a senior Rust compiler engineer…"
  Sets the persona, vocabulary and depth.
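
The three elements can be assembled mechanically. A throwaway sketch (the template wording is mine, not lifted from any of the quoted production prompts):

```python
def build_prompt(role: str, objective: str, constraints: list[str]) -> str:
    """Assemble a prompt from the three core elements: role, objective, constraints."""
    lines = [f"You are {role}.", f"Objective: {objective}", "Constraints:"]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

prompt = build_prompt(
    "a senior Rust compiler engineer",
    "explain this borrow-checker error",
    ["answer in under 100 words", "include one code example"],
)
```

Keeping the three elements as separate arguments also makes it easy to vary one (say, the constraints) while holding the others fixed in an eval.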

morganmcg1 / llms-full-plan.txt
Created March 9, 2025 11:18
llms-full plan
Browsing completed
Browsing completed
I am browsing the URL: https://llmstxt.org/
Browsing completed
I am browsing the URL: https://weave-docs.wandb.ai/
Now that I have a good understanding of both the llms.txt concept and Weave's documentation, I'll create an appropriate llms.txt file for Weave. I'll focus on making it useful for LLMs to understand and work with Weave's capabilities.
Edited the contents of a file
I've created a comprehensive llms.txt file for Weave that follows the standard format and provides structured information about the framework. Here's what I included:
Title (H1): Clear identification of the project
morganmcg1 / llms.txt
Created March 9, 2025 10:44
Weave llms.txt
# Weights & Biases (W&B) Weave
> Weave is a framework for tracking, experimenting with, evaluating, deploying, and improving LLM-based applications. It provides comprehensive tools for tracing LLM calls, monitoring application behavior, systematic prompt engineering, evaluation, and deployment of guardrails in production.
Weave is designed to support the entire lifecycle of LLM application development, from initial experimentation to production deployment. Key capabilities include:
- Tracing and monitoring of LLM interactions and application logic
- Systematic iteration on prompts, datasets, and models
- Experimentation through an LLM Playground
- Comprehensive evaluation tools with custom and pre-built scorers
import weave
from weave.scorers import WeaveToxicityScorerV1

toxicity_scorer = WeaveToxicityScorerV1()

@weave.op
def call_llm(prompt: str) -> str:
    """Generate text using an LLM."""
    # Your LLM generation logic here
    return prompt.upper()
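Without pulling in Weave, the guardrail pattern the snippet gestures at is: generate, score the output, and only release it if the score passes a threshold. A dependency-free sketch (the blocklist, threshold, and function names are invented stand-ins, not Weave's scorer implementation):

```python
BLOCKLIST = {"darn", "heck"}  # toy stand-in for a real toxicity model

def toxicity_score(text: str) -> float:
    """Fraction of words that hit the blocklist (toy scorer)."""
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(w in BLOCKLIST for w in words) / len(words)

def guarded_call(prompt: str, threshold: float = 0.2) -> str:
    """Generate, score, and block outputs whose score exceeds the threshold."""
    output = prompt.upper()  # same toy "LLM" as the snippet above
    if toxicity_score(output) > threshold:
        return "[output blocked by guardrail]"
    return output

safe = guarded_call("hello world")
```

The important structural point survives the toy scorer: the score is computed on the model's output before the caller ever sees it, so swapping in a real toxicity model changes only `toxicity_score`.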
morganmcg1 / marimo_question.txt
Created February 11, 2025 15:45
marimo_question.txt
This file has been truncated, but you can view the full file.
<file_tree>
/Users/morganmcguire/ML/marimo
├── .github
│   ├── ISSUE_TEMPLATE
│   └── workflows
├── configs
├── dagger
│   └── src
│       └── main
├── development_docs