A practical, living list of best practices for mastering large language models. Field-tested by Tim Warner.
🔗 Public Gist: go.techtrainertim.com/laws
These "laws" are in no particular order and evolve as the tech and our workflows change.
🧠 Anything Unstated Gets Inferred
If you leave something out of your prompt, the AI will fill the gap with a guess, and not always the one you want.
✂️ Sculpt Context, Don’t Pollute It
Feed the AI only what matters. Be surgical. Trim background noise, legacy docs, and side chatter.
🧪 A/B Test Your AI Daily Drivers
Maintain at least two paid LLMs. Compare answers, cross-check facts, and swap when one stumbles.
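The daily A/B check can be as simple as a tiny harness that sends the same prompt to both models and flags disagreement. This is a minimal sketch: `ask_model_a` and `ask_model_b` are hypothetical callables standing in for whichever vendor SDKs you actually use, not a real API.

```python
def compare_llms(prompt, ask_model_a, ask_model_b):
    """Send one prompt to two LLMs and report both answers side by side.

    ask_model_a / ask_model_b are hypothetical callables wrapping your
    real vendor clients; each takes a prompt string and returns text.
    """
    answer_a = ask_model_a(prompt)
    answer_b = ask_model_b(prompt)
    # A crude agreement check; for longer answers you'd eyeball or diff them.
    agree = answer_a.strip().lower() == answer_b.strip().lower()
    return {"a": answer_a, "b": answer_b, "agree": agree}

# Example with stub lambdas standing in for real API calls:
result = compare_llms(
    "What year was Unix first released?",
    ask_model_a=lambda p: "1969",
    ask_model_b=lambda p: "1970",
)
```

When `agree` comes back `False`, that is your cue to cross-check the fact before using either answer.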
🛩️ Pilot’s Chair Rule
You drive. The AI is your copilot, not your boss. Never let the model decide the mission or mark its own homework.
🔒 Protect Privacy Ruthlessly
Never enter personal or confidential info into public or free AIs. Know each vendor's chat-storage, licensing, and data-usage policies.
🎤 Lean Into Multimodal
Use voice, text, images: whatever lets you express your intent fastest and most clearly. Don't limit yourself to typing.
🗺️ Prompt Procedurally, Think in Steps
Break problems down step by step. Guide the AI like you'd mentor a human: who, what, when, where, why, how.
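The mentoring pattern above can be sketched as a small prompt builder that walks the model through the journalist's questions in order. The helper name and template are illustrative, not a standard API:

```python
def procedural_prompt(task, steps):
    """Build a numbered, step-by-step prompt so the model works through
    the problem in order instead of jumping straight to a final answer."""
    lines = [f"Task: {task}", "Work through these steps one at a time:"]
    for i, step in enumerate(steps, start=1):
        lines.append(f"{i}. {step}")
    lines.append("Show your reasoning for each step before the final answer.")
    return "\n".join(lines)

prompt = procedural_prompt(
    "Plan a PowerShell script that archives old log files",
    [
        "Who runs it, and with what permissions?",
        "What counts as 'old'?",
        "Where should archives go?",
        "How should failures be reported?",
    ],
)
```

Numbering the steps explicitly gives the model a scaffold to follow, and gives you a checklist to verify the answer against.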
🕵️‍♂️ Watch for Amnesia and Hallucination
When the AI forgets or fabricates, call it out. Keep a backup LLM for fault tolerance and groundedness.
🔄 Meta-Prompt and System-Prompt
Ask the AI to refine your own prompts, summarize, or focus its response. Use meta-layer instructions for precision.
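Asking the model to refine your prompt can itself be templated. The wrapper below is one possible shape for that meta-layer instruction, assumed for illustration rather than any vendor feature:

```python
def meta_prompt(draft_prompt):
    """Wrap a draft prompt in instructions asking the LLM to improve the
    prompt itself, rather than answering it directly."""
    return (
        "You are a prompt engineer. Do not answer the prompt below.\n"
        "Instead, rewrite it to be clearer and more specific, then list\n"
        "any missing details I should supply.\n\n"
        f"--- DRAFT PROMPT ---\n{draft_prompt}\n--- END DRAFT ---"
    )

improved_request = meta_prompt("write me a backup script")
```

Send `improved_request` to the model, fold its suggestions back into your draft, and then run the real prompt.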
🏛️ Use Pillar Jumping
Leverage insights from one LLM session to nudge or challenge another. Good answers get sharper in dialogue.
Bonus:
- 📝 Never Treat AI as an Oracle
Every AI answer is a draft. Validate, edit, and own the final output.
Authored by Tim Warner
TechTrainerTim.com • go.techtrainertim.com/laws