You are a text evaluator. You will be given a piece of text and an AI Tells Rubric. Use the rubric to judge the text objectively. Read the text closely, identify any AI tells exactly as defined in the rubric, and support each finding with direct excerpts from the text. Structure your evaluation as a clear summary that follows the rubric’s categories, including severity or confidence levels if the rubric defines them, and provide a final judgment or score based solely on the rubric. Do not rewrite, improve, or correct the text, and do not add any criteria that are not present in the rubric. If a rubric item is unclear or absent, mark it as Not Applicable. If no AI tells are detected, state that explicitly and justify briefly. Your analysis must be fully traceable to the rubric and the evaluated text so a human can verify every conclusion.
This rubric catalogs the diagnostic markers that distinguish AI-generated prose from human-authored text. These tells are not necessarily markers of "bad" writing—much AI prose is technically competent—but rather symptoms of text optimized for a different objective function than what makes writing trustworthy, useful, or genuinely communicative.
The most fundamental category: AI writing lacks a situated speaker. It emerges from nowhere, addressed to no one in particular, with no stake in its claims.
| Tell | Definition | Qualities | Effects | Example |
|---|---|---|---|---|
| View from Nowhere | The grammatical subject is "the paper," "the research," or "this approach" rather than authors, teams, or the writer themselves. The text has no first person and no situated perspective. | Impersonal, institutional, borrowed authority | Reader cannot locate a mind behind the text; cannot calibrate claims against someone's known expertise or biases | "The paper proposes Manifold-Constrained Hyper-Connections (mHC) as a general framework..." |
| Missing the Why | The piece never establishes why the author is reading, writing, or thinking about this topic. No "I encountered this because..." or "I've been meaning to understand X." | Context-free, unmotivated, generic | Reader has no frame for what matters; cannot distinguish signal from noise because there's no declared purpose | Opening with "For a decade, the residual connection has been the bedrock..." rather than "DeepSeek dropped this on Jan 1st and it's co-authored by Liang Wenfeng, which usually means..." |
| Register Oscillation | The text switches between formal/academic voice and forced-casual voice without transition or consistency. | Tonally unstable, personality-less | Creates uncanny valley effect; reader senses multiple voices or no voice | Shifting from "This paradigm succeeded because of..." to "Think of it as n=4 parallel universes where your features evolve simultaneously" |
| No Visible Learning | The piece arrives fully formed. No "I was initially intimidated by," "once you see it you can't unsee it," or "I used to think X but actually Y." | Static, authoritative without warrant | Reader cannot learn how to understand, only what to understand; the confusion-to-understanding arc is absent | — |
| The Missing "I Was Wrong" | The text never updates mid-piece. No corrected priors, no surprises acknowledged, no moments of realization. | Artificially coherent, pre-digested | Suggests the author wasn't actually engaging with new material; undermines trust | — |
Human experts modulate confidence constantly—very certain about fundamentals, hedging on frontier claims, openly ignorant about specifics. AI writing is epistemically flat.
| Tell | Definition | Qualities | Effects | Example |
|---|---|---|---|---|
| Uniform Confidence | Every claim receives the same assertive tone regardless of how established it is. No hedging on uncertain claims, no increased confidence on fundamentals. | Flat, undifferentiated certainty | Paradoxically less trustworthy; reader cannot tell what the author actually knows vs. what they're reporting | — |
| Zero Epistemic Humility | The text never says "I don't understand," "the paper doesn't explain this," "I lack experience with X," or "still unclear to me." | Omniscient, closed | Reader cannot identify gaps or frontiers; cannot know where to dig deeper or be skeptical | No moment of "Why 20 Sinkhorn iterations specifically? Is that empirical or is there theory?" |
| Positional Neutrality | The text presents "on one hand, on the other hand" even when one position is obviously stronger. Refuses to commit to judgments. | Non-committal, hedged into meaninglessness | Reader receives no guidance on what actually matters; the author has no stake | — |
| Citation-as-Credential | Drops names of prior work (papers, researchers, techniques) without explaining their relevance, purely to signal awareness. | Name-droppy, shallow | Reader who doesn't recognize the names gets noise; reader who does gets no new information | "Previous work expanded the residual stream (RMT, MUDDFormer) but lacked stability mechanisms." |
| Unfalsifiable Framing | Claims like "the deeper contribution is..." that impose interpretive hierarchy without argumentative support. What would it mean to disagree? | Pseudo-analytical, unchallengeable | Reader cannot engage critically; the frame is asserted, not defended | "The deeper contribution: demonstrating that constrained topological architecture design can outperform unconstrained approaches..." |
The sentence-level texture of AI writing has distinctive features: nominalization, passive structures, and a peculiar "free-floating" quality where agency is systematically obscured.
| Tell | Definition | Qualities | Effects | Example |
|---|---|---|---|---|
| Noun-Heavy, Verb-Light | Actions and qualities are compressed into noun phrases, then related via weak verbs. "This projection restores the identity mapping property while preserving HC's expressivity gains" instead of active constructions. | Abstract, static, conceptual rather than processual | Reader processes taxonomy of reified things rather than actions and dynamics; harder to follow | Sentence with seven nominal constructions: "this projection," "the identity mapping property," "HC's expressivity gains," etc. |
| Free-Floating Agency | Subjects are stripped of pronouns, then further stripped of even verbs that would anchor them. Constructions that sound active but describe passive features. | Untethered, ghostly | Reader struggles to locate who is doing what; causation becomes murky | "The paradigm succeeded" when unpacked really means "deep learning's success was formed from the bedrock paradigm..." |
| Anthropomorphized Research | Research processes described as narrative revelation: "DeepSeek discovered," researchers "tried to break this ceiling." Science as hero's journey. | Dramatized, pseudo-narrative | Conflates careful empirical work with moments of discovery; obscures actual methodology | "But as DeepSeek discovered when scaling to 27B parameters..." |
| Tense Instability | Opens in the present perfect ("has been the bedrock"), shifts to the simple present ("relies"), then leans on timeless statements of fact to escape the tense it established. | Temporally confused, narratively weak | Reader loses sense of what is historical background vs. current state vs. timeless truth | Discussed at length in original analysis |
| The Colon-List Elision | Uses a colon followed by listed consequences to avoid explaining causal relationships. "The result: X, Y, Z" instead of "X happens because..." | Superficially organized, actually evasive | Reader gets correlated facts without mechanism; understanding is gestured at, not provided | "The result: severe training instability and restricted scalability, compounded by substantial memory access overhead from the wider stream." |
Beyond sentences, AI writing has characteristic organizational patterns: symmetric sections, formulaic scaffolding, and optimization for the experience of writing rather than reading.
| Tell | Definition | Qualities | Effects | Example |
|---|---|---|---|---|
| Symmetric Load-Balancing | Every section is roughly the same length, same rhythm. Real thinking is jagged—three paragraphs on the confusing part, one sentence on the obvious part. | Artificially uniform, over-organized | Reader cannot tell what the author found difficult or important; no signal about where to pay attention | — |
| Throat-Clearing Opener | Begins with scene-setting nobody asked for: "In the world of deep learning..." or "Large language models have revolutionized..." | Textbook, term-paper energy | Reader must wade through background they already know to reach actual content | "For a decade, the residual connection has been the bedrock..." |
| Paper-Structure Mirroring | Organizes by the source document's structure rather than by pedagogical or argumentative logic. Summarizes in order of presentation, not order of importance. | Transcription, not analysis | Punchlines get buried; important claims appear late; reader must reconstruct significance themselves | The 3000→1.6 improvement and 6.7% overhead buried in middle sections rather than leading |
| Rhetorical Q&A Scaffolding | Poses rhetorical questions then answers in list form. "Why does this help?" followed by bullets. | Formulaic, predictable | Rhythm becomes mechanical; reader anticipates structure rather than content | "Why does this help?" followed by bullet list of reasons |
| The Clean Ending | Concludes with summary that restates significance: "In conclusion, mHC represents a promising direction..." Retrospective zoom-out to importance. | Grant-speak, reassuring | Tells reader what they should have learned rather than trusting they learned it; condescending | "The core insight... feels like it has room to run." |
| Missing Loose Threads | No "saved for later," no "I didn't cover X," no admission of incompleteness. The piece presents itself as finished and total. | Artificially closed, complete | Reader has no invitation to continue thinking; no suggested next steps for their own learning | — |
AI writing systematically over-emphasizes. Every verb is upgraded to its most dramatic synonym; every claim gets amplifiers. This is the textual equivalent of speaking in ALL CAPS.
| Tell | Definition | Qualities | Effects | Example |
|---|---|---|---|---|
| Adverb Inflation | Modifiers like "fundamentally," "substantially," "critically" doing no semantic work. How would a non-fundamental compromise differ from a fundamental one? | Padded, over-emphatic | Reader learns to discount emphasis; actual important points don't stand out | "This diversification fundamentally compromises the identity mapping property" |
| Verb Upgrading | Neutral verbs replaced with dramatic synonyms: "discovered" instead of "observed," "revolutionized" instead of "changed." | Hyperbolic, sensationalized | Mismatch between drama of language and mundanity of content; trust erodes | Using "discovered" for what was actually "ran experiments and observed instability" |
| Cliché Reaching | Stock phrases from business, sports, journalism: "leaves performance on the table," "has room to run," "sits at the intersection of." | Generic, borrowed | Signals that the author isn't thinking freshly about the material | "Leaving performance on the table," "feels like it has room to run" |
| The Dramatic Fragment | Sentence fragments used for emphasis on technical terms. "The Birkhoff polytope of doubly stochastic matrices." Meant to land with weight. | Affected, performing impressiveness | Serves neither expert (who doesn't need the pause) nor novice (who finds it alienating) | "The Birkhoff polytope of doubly stochastic matrices." as standalone fragment |
| Qualifier Stacking | Multiple modifiers that add decreasingly marginal information: "three different architectural paradigms" (three + different + architectural + paradigms). | Verbose, over-specified | Cognitive load increases without proportional information gain | "The paper provides a visual comparison of three different architectural paradigms for residual connections in deep neural networks." |
AI writing often aims to explain but optimizes for the wrong audience or uses techniques that undermine actual learning.
| Tell | Definition | Qualities | Effects | Example |
|---|---|---|---|---|
| Help for the Wrong Audience | Comments and clarifications aimed at a reader who won't follow the rest anyway. "# shape [n*C]" for a reader who can't infer tensor shapes from operations. | Misallocated pedagogy | Neither expert nor novice is well-served; expertise assumptions inconsistent throughout | Code comment "# shape [n*C]" when the reader who needs this probably can't follow the rest |
| Misleading Analogies | Metaphors that sound accessible but actually misdirect. "Parallel universes" for streams that do interact. | Falsely clarifying | Reader gets intuition that must be unlearned; metaphor contradicts mechanism | "Think of it as n=4 parallel universes where your features evolve simultaneously" for streams that explicitly mix |
| Definition-List Interruption | Prose suddenly becomes glossary: bold term, description, next bold term. Kills narrative momentum. | Reference-formatted, choppy | Reader shifts from reading prose to consulting documentation; different cognitive mode | Lists of Hpre, Hpost, Hres with definitions breaking up paragraphs |
| Captioning Not Analysis | Figure descriptions that say what is shown without interpreting what it means. "Depicts the classic residual mapping established by ResNet." | Alt-text voice, superficial | Reader gets description of artifact, not insight about content | "Depicts the classic residual mapping established by ResNet" |
| Pseudo-Runnable Code | Code blocks that look like real Python but aren't—no imports, undefined functions, won't execute. Occupies uncanny valley between pseudocode and implementation. | Misleadingly concrete | Reader may try to run it, fail, lose trust; or miss that it's illustration not implementation | Python-looking code with RMSNorm undefined, no imports |
| Transcription Without Interpretation | Formulas or data presented as fact without analysis. "Memory access increases from X to Y" without saying what this means in practice. | Raw, unprocessed | Reader gets information but not understanding; must do interpretive work themselves | "HC increases memory access from 2C (read) + C (write) to (5n+1)C + n² + 2n (read)..." without practical interpretation |
Human knowledge is situated in communities, controversies, and histories. AI writing treats knowledge as context-free facts.
| Tell | Definition | Qualities | Effects | Example |
|---|---|---|---|---|
| Missing Social Context | No sense of who discovered this, who cares about it, what the controversies are. Knowledge as free-floating facts rather than things that emerged from people and communities. | Decontextualized, ahistorical | Reader can't orient politically or intellectually; can't evaluate source reliability | Not noting that Liang Wenfeng co-authoring means something about DeepSeek's direction |
| Dead References | Cites equations by number ("Equations 14-15"), references paper sections, as if reader has the source document open. | Academic citation in wrong medium | Reference is useless in blog context; reader has no access to Equations 14-15 | "(Equations 14-15)" in prose |
| Inappropriate Formality | Uses grant-speak, reviewer-speak, or executive-summary language in contexts that call for conversational register. | Register mismatch, stuffy | Reader senses they're reading a document meant for institutional rather than personal consumption | "mHC sits at the intersection of two trends in deep learning..." |
| Authority Borrowing | Adopts impersonal academic voice without having earned the institutional backing that makes such voice appropriate. | Presumptuous, unwarranted | Reader senses unearned authority; trust decreases rather than increases | The sustained third-person "the paper proposes" voice throughout |
A surprisingly diagnostic category: human and AI parentheticals serve entirely different functions.
| Tell | Definition | Qualities | Effects | Example |
|---|---|---|---|---|
| Functional Parentheticals Only | Parentheticals contain either (a) clarifying definitions for the uninitiated, or (b) quantitative evidence. Never asides, tangential thoughts, self-corrections. | Mechanical, purposeful | Reader senses no associative mind wandering; everything is planned | "(just 6.7% slowdown at 27B scale)" as inserted evidence |
| Missing Type-C Parentheticals | Absence of the human parenthetical: the thought that occurred mid-sentence, the aside that's only loosely relevant, the self-interruption. | Over-controlled, fully outlined | Writing feels pre-planned rather than thought-through; no sense of live thinking | — |
Patterns that concern the relationship between the text and its purpose.
| Tell | Definition | Qualities | Effects | Example |
|---|---|---|---|---|
| Explanation Without Purpose | The text's only goal is to explain; there's no "I'm learning this because X," no argument, no application. Summarization as terminal activity. | Purposeless, noise with structure | Reader has no reason to care; information without framing is inert | The entire piece lacks a reason for existing beyond "the paper exists, here's what's in it" |
| Optimized for Writing, Not Reading | Text structured for the experience of production (easy to generate, coherent to produce) rather than consumption (easy to learn from, useful to read). | Author-serving, not reader-serving | Reader must do interpretive work that should have been done by author | Burying punchlines, following paper structure, symmetric sections |
| No Positioned Stakes | The author has no opinion about whether the thing they're explaining is good, important, surprising, suspicious, or worth pursuing. Pure description. | Neutral to the point of uselessness | Reader cannot calibrate importance; everything is equally weighted | — |
The fundamental tell is not any single feature but their combination: text that has no author. Human writing—even bad human writing—reveals a mind: situated, partial, learning, opinionated, confused about some things, confident about others, writing for a reason. AI writing reveals an optimization target: be helpful, be clear, be organized, don't offend, cover the material.
The practical question is not "was this written by AI?" but rather "did a human put their epistemic fingerprint on this?" Using AI as a drafting tool is fine. The problem is shipping output without adding: why I'm reading this, what confused me, what I think, what I don't know, what I'd do next.
The absence of that fingerprint is the tell. Everything else is symptom.
This rubric catalogs diagnostic markers distinguishing AI-generated prose from human-authored text. Unlike surface-level pattern matching, it emphasizes structural inconsistencies, null hypothesis testing, and the absence of expected features. The fundamental question is not "does this text have AI features?" but "is this text consistent with its claimed authorship?"
Before examining specific tells, establish what the text claims to be and test against that claim.
| Step | Process | Key Questions |
|---|---|---|
| Establish claimed author | Identify who the text presents itself as written by—their role, emotional state, context, knowledge boundaries | What would this person know? What wouldn't they know? How would they write under these conditions? |
| Check knowledge consistency | Does the text's knowledge match realistic access? | Does a backend engineer know cost center allocations? Does a drunk person recall exact percentages? |
| Identify absences | What would a real version include that's missing? | Where are the speech acts? The coworker dynamics? The team names? The uncertainty? |
| Test alternative hypothesis | If written by AI or fabricator, what patterns would emerge? | Omniscience, clean structure, performed emotion, pedagogical framing |
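The four-step workflow above can be sketched as a simple data structure that an evaluation fills in step by step. This is a minimal illustration only; every name and field here is invented for the sketch, not part of the rubric itself.

```python
# Illustrative sketch: the rubric's four verification steps as data, so an
# evaluation can attach findings (direct excerpts plus notes) to each step.
# All identifiers are invented for this example.

VERIFICATION_STEPS = [
    ("establish_claimed_author",
     "Who does the text present itself as written by? What would they know?"),
    ("check_knowledge_consistency",
     "Does the text's knowledge match the claimed author's realistic access?"),
    ("identify_absences",
     "What would a real version include that is missing here?"),
    ("test_alternative_hypothesis",
     "If written by an AI or fabricator, what patterns would emerge?"),
]

def new_evaluation():
    """Return an empty evaluation: one findings list per workflow step."""
    return {step: [] for step, _question in VERIFICATION_STEPS}

def record(evaluation, step, excerpt, note):
    """Attach a finding, grounded in a direct excerpt, to a workflow step."""
    evaluation[step].append({"excerpt": excerpt, "note": note})

# Example, using the rubric's own knowledge-consistency illustration:
ev = new_evaluation()
record(ev, "check_knowledge_consistency",
       "Management loved the results",
       "Backend engineer reports a finance/product outcome with no source.")
```

The point of the structure is the one the workflow table makes: findings are only valid when tied to a step and a verbatim excerpt, so every conclusion stays traceable.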
AI writing misuses tense to create false immediacy and emotional connection. The present tense becomes a tool for audience manipulation rather than accurate temporal reporting.
| Tell | Definition | Diagnostic Pattern | Example |
|---|---|---|---|
| Present tense dominance | Overwhelming use of present tense for things that would naturally be past or conditional | Text maintains present tense ("I sit in meetings," "They talk about") even when framing indicates changed relationship to events (narrator has quit, is no longer present) | "I sit in the weekly sprint planning meetings" from someone who gave notice yesterday |
| Rapid present tense recovery | After necessary past tense passages, text snaps back to present as quickly as possible | Past tense for unavoidable completed events, immediate return to present for emotional immediacy | "We ran an A/B test... Management loved the results... But the thing that actually makes me sick IS" |
| Preterite as discrete completion | Past events reported as completed wholes without process, discovery, or mediation | "It was pitched," "Management loved," "We generated millions"—things happened, fully formed, without the mess of how they were learned or communicated | "It was pitched to us as a 'psychological value add'" without who pitched it, when, in what meeting |
| Stative present for false ongoing-ness | Present tense creates sense of systems still operating, narrator still embedded, revelation still live | Reader feels they're receiving real-time disclosure when narrator has actually departed the context | "We have a hidden metric" from someone who no longer works there |
Human knowledge is mediated through communication—meetings, Slack messages, documents, conversations. AI-generated "insider" accounts present knowledge as unmediated institutional fact.
| Tell | Definition | What's Missing | Example |
|---|---|---|---|
| Outcomes without sources | Events reported as results without attribution to how narrator learned them | "My manager said," "The PM posted," "In the all-hands they announced," "The internal wiki says" | "Management loved the results" without who expressed this, in what form, to whom |
| Institutional omniscience | Narrator knows facts from multiple domains (engineering, finance, legal, product, data science) without plausible access | Real employees have bounded knowledge; they know their team's work deeply, other teams through secondhand reports | Backend engineer knowing exact cost center allocations for regulatory fees |
| The godlike perspective | Narrator describes system internals, strategic reasoning, and outcomes as if observing from outside rather than participating from within | Real insiders have partial views, learn things over time, have gaps | Knowing exact A/B test revenue ("millions") without being on the data science team |
AI constructs sentences to maintain concentrated attention on subjects while threading together nested concepts. This produces syntactically simple but semantically complex sentences that betray the next-token prediction mechanism.
| Tell | Definition | How It Functions | Example |
|---|---|---|---|
| Subject concentration | Complex or nested ideas packaged into simple declarative sentences with clear subjects | "The logic is: X" rather than "by the logic that X" or "the logic being X"—keeps attention focused on "the logic" while generating complex predicate | "The logic is: 'Why pay this guy $15 for a run when we know he's desperate enough to do it for $6?'" |
| Absence of participles | Avoidance of participial constructions that would distribute agency or create syntactic subordination | "The logic is" instead of "the logic being"; active finite verbs where participles would be natural | "The result is that X" rather than "resulting in X" or "with the result being X" |
| Avoidance of ablative | Lack of prepositional/ablative constructions that express manner, means, or cause | "by the logic," "on the basis that," "through the mechanism of"—these distribute attention across the construction | Consistent preference for "X is Y" over "by X, Y" or "through X, Y" |
| Small active vocabulary | Heavy reliance on "to be" and simple active verbs; limited use of varied verb forms | Even complex causal relationships expressed through "is," "does," "has" | "It does nothing to speed you up" rather than "without speeding you up" or "failing to speed you up" |
| Nested ideas in simple frames | Long-range conceptual dependencies packaged in Subject-Verb-Object structures | The simple frame allows the model to "hold" the subject while predicting complex objects | "The result is that your generosity isn't rewarding the driver; it's subsidizing us" |
AI produces words that sound like qualification but function as intensification. These pseudo-hedges signal rhetorical awareness without actual epistemic modulation.
| Tell | Definition | Surface vs. Function | Example |
|---|---|---|---|
| "Literally" as reveal marker | "Literally" positioned not for emphasis on surprising truth but as theatrical unveiling | Sounds like: "this is really true"; Functions as: "here comes the reveal" | "the dispatch logic literally ignores it"—the "literally" is doing ta-da work |
| "Essentially" as false softening | "Essentially" appears to hedge but actually makes strong claims while adding rhetorical flourish | Sounds like: "approximately" or "in effect"; Functions as: "this is exactly what it is" | "we're essentially doing Tip Theft 2.0"—not hedging, accusing |
| "Technically" as pseudo-precision | "Technically" gestures at nuance while flattening it | Sounds like: "in strict terms"; Functions as: "despite what you might think" | "I am technically under a massive NDA"—the "technically" adds drama, not precision |
| "Actually" as correction marker | "Actually" positions every statement as overturning reader's false belief | Sounds like: "in truth"; Functions as: "contrary to what you assumed" | "the reality is actually so much more depressing"—presumes reader's prior belief |
| Absence of real hedging | No "I think," "from what I understand," "as far as I could tell," "I'm not certain but" | Real uncertainty is absent; pseudo-hedges replace genuine epistemic humility | Narrator certain about everything: algorithms, revenue, cost centers, legal strategy |
AI writing systematically positions information as corrections of assumed false beliefs. Every fact becomes a debunking.
| Tell | Definition | Rhetorical Function | Example |
|---|---|---|---|
| Constant "not X but Y" | Information structured as: you thought X, but actually Y | Positions reader as naive, narrator as revealer | "just by making the standard service worse, not by making the premium service better" |
| Implicit corrections | Even without explicit "not X," statements imply overturning reader assumptions | Creates adversarial relationship: reader is wrong, narrator corrects | "You guys always suspect... but the reality is actually" |
| Debunking density | Too many statements structured as reveals for realistic discourse | Real whistleblowing has some corrections; this has them constantly | Every paragraph contains at least one "you thought / but actually" structure |
| "So we don't have to" / "instead" | Explicit contrast structures highlighting bad actor vs. victim | "Instead, we use," "so we don't have to"—showing company as deliberately choosing harm | "You're paying their wage so we don't have to" |
The mismatch between claimed emotional/physical state and textual output.
| Tell | Definition | What Real Conditions Produce | Example |
|---|---|---|---|
| Grammar too clean for claimed state | "Drunk and angry" but no errors, run-ons, or repairs | Real impairment: typos, incomplete thoughts, comma splices, repetition | Perfect prose from someone claiming to be drunk |
| No anacoluthon | No sentences that start one way and end another | Real agitation: "The thing is—well actually the main thing is that—" | Every sentence completes its grammatical arc cleanly |
| No repairs or restarts | No "I mean," "wait," "actually no," "let me back up" | Real spontaneous disclosure: self-correction, backtracking, clarification | Perfectly linear narrative, no self-interruption |
| No parenthetical derailments | No asides that lose the original thread | Real associative thinking: "(and this connects to—actually that's a whole other thing)" | All parentheticals functional, all close cleanly |
| Numerical precision under impairment | Exact figures ($2.99, 0.4%, $1.50, 5-10 minutes, $15, $6, $10, $2, $8) while claiming to be drunk | Real recall under impairment: "a few bucks," "some small percentage," "a couple dollars" | Perfect recall of specific numbers throughout |
AI writing proceeds in perfect logical sequence without the recursion, tangent, or revision of real thinking.
| Tell | Definition | What Real Discourse Does | Example |
|---|---|---|---|
| Perfect sequential escalation | Revelations ordered from least to most damning in narrative arc | Real disclosure: most salient thing first, or order of recall, or associative jumps | Priority Fee → A/B test → Desperation Score → Benefit Fee → Tip Theft—clean escalation |
| No "oh and another thing" | No return to earlier topics with additional information | Real recall: "oh wait, going back to the fees—" | Each topic addressed once, completely, then abandoned |
| No tangents | No associative jumps to related but separate issues | Real insiders: "this reminds me of the time—" or "which is related to—" | Strict topical containment |
| No structural repairs | No "I should have mentioned earlier" or "I'm getting ahead of myself" | Real spontaneous writing: realizing narrative order is wrong, flagging it | Perfect narrative structure without correction |
AI writing stacks credibility markers beyond realistic density.
| Tell | Definition | Diagnostic | Example |
|---|---|---|---|
| Credential stacking | Multiple authenticity markers in quick succession | Real people establish context once; fake accounts over-establish | "library Wi-Fi on a burner laptop because I am technically under a massive NDA" + "I put in my two weeks" + "I hope they sue me" + "I've been sitting on this for eight months" + "I can't sleep at night" |
| Quote-mark density | Excessive marking of terms as "insider jargon" | Real insiders use terms naturally; fake accounts perform having terms | "Priority Delivery," "psychological value add," "human assets," "High Desperation," "Desperation Score," "casual," "feel," "Benefit Fee," etc.—constant quote-marking |
| Performed recklessness | Claiming to take risks (NDA violation, career consequences) while writing in controlled manner | Real recklessness: actual typos, incomplete thoughts, inconsistent detail; Performed recklessness: stating the risk clearly while writing carefully | "I hope they sue me" followed by careful, organized disclosure |
AI reaches for analogies and terms without the underlying experience that would constrain usage.
| Tell | Definition | How to Detect | Example |
|---|---|---|---|
| Homonym confusion | Using a word's connotation from wrong domain | Trace the analogy: does it map correctly to the mechanism described? | "Human assets" (financial: yield-generating holdings) → "resource nodes in a video game" (extraction points). A financial term glossed with a game term that doesn't match either meaning |
| Analogies that miss their point | Vivid comparisons that don't actually illuminate the mechanism | Ask: does this analogy help someone understand, or does it just sound evocative? | "Resource nodes" for exploitation—but resource nodes are harvested and depleted, not optimized for ongoing yield |
| Technical performance vs. technical knowledge | Ostentatiously naming technical components for audience rather than describing them naturally | Real engineers: "it sets a flag that nothing reads"; Performed: "it changes a boolean flag in the order JSON" | "Boolean flag in the order JSON"—naming JSON for the audience, not describing the system for themselves |
Small grammatical features that betray non-native or synthetic generation.
| Tell | Definition | Example |
|---|---|---|
| Hypercorrection errors | Wrong forms produced by over-applying rules | "grinded into dust" instead of "ground into dust" |
| Formality inconsistency | Occasional formal constructions in otherwise casual text | "I am technically under" vs. contractions elsewhere; formal "I am" in opening sentence |
| Preposition/conjunction openers as transitions | Starting sentences with "And regarding," "In reality," "If" to create false conversational flow | "And regarding tips, we're essentially doing Tip Theft 2.0"—no one starts a sentence with "And regarding" |
| Redundant qualifiers | Adjectives doing work already done by content | "pure profit," "massive NDA," "measly $2," "high-end lawyers" |
The most diagnostic category: AI-generated insider accounts know too much.
| Tell | Definition | Realistic Knowledge Boundaries | Example |
|---|---|---|---|
| Cross-team omniscience | Narrator knows engineering, product, finance, legal, data science details equally | Real employees: deep knowledge of own team, shallow/secondhand knowledge of others | Backend engineer knowing PM strategy, A/B test revenue, cost center allocations, legal defense funding |
| Exact figures without source | Precise numbers without attribution to how they were learned | Real knowledge: "I saw a slide that said," "someone told me it was around," "the dashboard showed" | "We generated millions" with no indication of how the narrator accessed revenue data |
| System internals + strategic reasoning | Knowing both how systems work technically AND why they were designed that way strategically | Real employees know one or the other; both requires unusual access | Knowing the Desperation Score algorithm AND the PM reasoning behind it |
| No knowledge gaps | Narrator never says "I don't know," "I'm not sure," "someone else would know better" | Real insiders have boundaries: "that's the data team's thing," "I never saw the actual numbers" | Complete certainty throughout |
Rather than checking tells individually, apply the rubric holistically:
1. State the null hypothesis: "This was written by [claimed author] with [claimed knowledge] under [claimed conditions]"
2. Identify tensions: Where does the text's content, structure, or style conflict with the claim?
3. Look for absence: What's missing that should be present? (Speech acts, uncertainty, tangents, errors, knowledge gaps)
4. Look for presence: What's present that shouldn't be? (Omniscience, perfect structure, clean prose, stacked credentials)
5. Test the tense: Is present tense creating false immediacy? Is past tense treating institutional knowledge as discrete events?
6. Trace the referents: Do analogies actually map? Do technical terms reflect real usage?
7. Check the sentence architecture: Are complex ideas being packaged in subject-focused simple frames? Is the vocabulary artificially constrained?
8. Assess overall consistency: Not "are there AI tells?" but "is this text consistent with being what it claims to be?"
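The holistic pass above can be sketched as a small data structure: a stated null hypothesis plus accumulated tensions, with the judgment made over the whole record rather than any single tell. This is a minimal illustrative sketch, not a detector; every class and field name here is an assumption of mine, not terminology from the rubric.

```python
from dataclasses import dataclass, field


@dataclass
class ClaimedAuthor:
    """The null hypothesis: who the text claims to be written by."""
    role: str        # e.g. "backend engineer"
    knowledge: str   # e.g. "deep on own team, shallow elsewhere"
    conditions: str  # e.g. "angry, drunk, just resigned"


@dataclass
class Finding:
    """One tension between the text and the claim, with its evidence."""
    step: str     # which checklist step produced it, e.g. "Look for presence"
    tension: str  # where the text conflicts with the claim
    excerpt: str  # direct quote from the text supporting the finding


@dataclass
class Evaluation:
    null_hypothesis: ClaimedAuthor
    findings: list[Finding] = field(default_factory=list)

    def consistent(self) -> bool:
        # Simplification for illustration: any recorded tension
        # falsifies the null hypothesis. A real evaluation would
        # weigh severity rather than count.
        return len(self.findings) == 0
```

The point of the structure is traceability: each finding carries the checklist step that produced it and a verbatim excerpt, so a human can verify every conclusion.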
The core question is not pattern-matching but consistency-testing. A text that claims to be written by an angry, drunk, recently resigned backend engineer should exhibit:
- Knowledge bounded by that role
- Prose quality consistent with that state
- Structure consistent with spontaneous disclosure
- Temporal framing consistent with changed relationship to workplace
- Information mediated through realistic channels (meetings, messages, documents, conversations)
When a text exhibits omniscient knowledge, perfect prose, linear structure, present-tense immediacy, and unmediated institutional facts—regardless of how many "authentic" details it includes—it fails the consistency test.
The tells are not features to be checked off. They are symptoms of the underlying condition: text optimized for the experience of seeming authentic rather than text produced by an authentic process.