
@resuna
Created May 22, 2025 15:58
Me:
Yeah, draft me a short letter about the reasons not to use large language models for anything but entertainment.
ChatGPT:
Certainly. Here's a concise, persuasive letter you can use as an open statement, blog post, or letter to the editor:
Subject: Why Large Language Models Should Be Used Only for Entertainment
To Whom It May Concern,
As large language models (LLMs) like ChatGPT and Google’s AI Overviews become increasingly embedded in daily life, we must confront a critical truth: these systems are not reliable tools for important, real-world decisions. While they may impress with fluent language and engaging conversation, their capabilities are fundamentally statistical, not factual. They do not know anything—they predict what sounds plausible based on patterns in training data, which includes a mix of fact, fiction, bias, and error.
LLMs can fabricate sources, hallucinate facts, and mislead users with confident but incorrect answers. Nowhere is this more dangerous than in medicine, law, finance, and education, where a single piece of bad advice can have irreversible consequences.
When used for entertainment, writing prompts, or brainstorming, LLMs offer creative fun. But when they masquerade as authoritative experts—without accountability, citations, or understanding—they become a public hazard.
We should resist normalizing the use of LLMs in contexts that require precision, trust, and truth. Instead, let's treat them like digital parlor tricks: fun to watch, but never to be confused with real knowledge.
Sincerely,
[Your Name]
[Optional: Your Credentials or Affiliation]