@thehunmonkgroup
Created October 10, 2025 13:45
Summary: **Words That Make Language Models Perceive**

URL: https://arxiv.org/pdf/2510.02425

Words That Make Language Models Perceive


QUESTION 1:

Can you give me a very clear explanation of the core assertions, implications, and mechanics elucidated in this paper?

ANSWER 1:

[THINKING]

Let me analyze this paper carefully to provide a clear explanation of its core assertions, implications, and mechanics.

Core Assertions:

  1. Main Hypothesis: Text-only LLMs can be steered toward modality-specific (vision/audio) representations at inference time through "sensory prompting" - using explicit cues like "imagine seeing..." or "imagine hearing..."

  2. Key Finding: When an LLM generates text conditioned on sensory prompts, its internal representations align more closely with specialist vision or audio encoders, even though the LLM was never trained on images or audio.

  3. Generative Representations: The paper introduces the concept of "generative representations" - representations that emerge through autoregressive generation rather than single forward passes. These show higher alignment with sensory encoders than standard embeddings.

Mechanics:

  1. Experimental Setup:

    • They use frozen text-only LLMs (Qwen3 family, 0.6B to 32B parameters)
    • They compare against frozen sensory encoders (DINOv2 for vision, BEATs for audio)
    • They use paired datasets (image-caption or audio-caption pairs)
  2. Key Technical Innovation - Generative Representations:

    • Instead of just taking embeddings from encoding a caption once, they:
      • Prepend a sensory prompt (e.g., "Imagine what it would look like to see {caption}")
      • Generate T tokens autoregressively
      • Average hidden states across all generated tokens and all layers
    • Mathematical formulation: z_g is the mean of the hidden states h_i^(ℓ) over the generated token positions i (from t to t+T) and all layers ℓ; a compact form is written out after this list
  3. Evaluation Method:

    • They use "mutual k-nearest neighbors" (mutual-kNN) alignment
    • For each caption, find its k nearest neighbors in the LLM's representation space
    • Find its k nearest neighbors in the vision/audio encoder's space (using the paired image/audio)
    • Measure overlap: higher overlap indicates better alignment between LLM and sensory encoder representations
  4. Sensory Prompting Strategies:

    • Three prompt templates were compared: a visual cue, an audio cue, and a no-cue baseline
    • Visual prompt: "Imagine what it would look like to see {caption}"
    • Audio prompt: "Imagine what it would sound like to hear {caption}"
    • Included a baseline prompt without specific sensory cues
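
A compact way to write the pooling in item 2, reconstructed from the wording above (the paper's exact index range and normalization may differ):

```latex
% Generative representation: the mean of hidden states h_i^{(l)} over the set I
% of generated token positions (roughly t to t+T) and all L layers.
z_g = \frac{1}{|\mathcal{I}| \, L} \sum_{\ell=1}^{L} \sum_{i \in \mathcal{I}} h_i^{(\ell)}
```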

Preliminary Findings:

  • Generating text inherently increases representation alignment
  • Sensory-specific prompts show promising potential for cross-modal representation transfer

Modality-Specific Insights:

  • Visual cues enhance vision-related embeddings
  • Audio cues boost audio-related embedding performance
  • Provides a flexible, runtime steering mechanism for representation learning

Model Scaling Observations:

  • Larger models demonstrate superior cross-modal alignment
  • Improved differentiation between sensory representations
  • Smaller models show a stronger default visual bias, which diminishes with scale

Language Representation Nuances:

  • Sensory word selection critically impacts alignment
  • Context-appropriate sensory language is essential
  • Arbitrary sensory terminology can degrade embedding quality

Practical Validation:

  • Visual prompting demonstrates tangible improvements in text-based visual question answering performance

Broader Theoretical Implications:

  • Challenges traditional symbol grounding limitations
  • Reveals language models' potential for implicit multimodal learning
  • Demonstrates runtime representation malleability
  • Suggests text-only models maintain inherent sensory interpretation capabilities

Core Mechanism:

  • Models implicitly infer the possible sensory causes behind a piece of text
  • Sensory prompts activate specific latent interpretations
  • Hidden state geometries reflect cross-modal representational similarities
  • Representations dynamically adapt through contextual conditioning

[ANSWER]

Core Assertions, Implications, and Mechanics

Core Assertions

This paper makes three central claims:

  1. Sensory prompts can steer text-only LLMs toward modality-specific representations - Simply adding cues like "imagine seeing..." or "imagine hearing..." causes the model's internal representations to align more closely with specialist vision or audio encoders, despite never being trained on images or sounds.

  2. "Generative representations" reveal latent multimodal structure - When LLMs generate text autoregressively (rather than just encoding it once), they produce representations that align better with sensory encoders. The hidden states accumulated during generation capture richer, more modality-appropriate structure.

  3. Text-only LLMs are implicitly multimodal - These models learn perceptually grounded representations from text alone by encoding multimodal regularities present in language (e.g., "red" appears in contexts that covary with actual visual redness in the world).

Key Mechanics

1. Generative Representations

Instead of standard embeddings from a single forward pass, they:

  • Prepend a sensory prompt to a caption
  • Generate T tokens autoregressively (e.g., 128 tokens)
  • Average all hidden states across all generated tokens and all layers

This creates a representation z_g that incorporates the model's reasoning process, not just its initial encoding.
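
The sketch below shows one way to compute such a representation with a frozen Hugging Face causal LM, assuming the prompt templates quoted in this summary. The model ID, greedy decoding, 128 new tokens, and the exact pooling choices (which hidden states to keep and how to average them) are illustrative assumptions, not the paper's released code.

```python
# Minimal sketch: prepend a sensory cue, generate T tokens, and average hidden
# states over generated positions and layers to get one vector per caption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Qwen/Qwen3-0.6B"  # smallest model in the family the paper reports using
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)
model.eval()

TEMPLATES = {
    "see": "Imagine what it would look like to see {caption}",
    "hear": "Imagine what it would sound like to hear {caption}",
    "none": "Imagine: {caption}",
}

def generative_representation(caption: str, cue: str = "see", new_tokens: int = 128) -> torch.Tensor:
    """Mean of hidden states over generated tokens and all layers (one vector)."""
    prompt = TEMPLATES[cue].format(caption=caption)
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model.generate(
            **inputs,
            max_new_tokens=new_tokens,
            do_sample=False,
            return_dict_in_generate=True,
            output_hidden_states=True,  # keep hidden states for every generation step
        )
    # out.hidden_states: one tuple per generated step; each holds one tensor per
    # layer with shape (batch, positions_at_that_step, hidden_dim).
    step_vectors = []
    for step_states in out.hidden_states:
        last_pos = torch.stack([layer[:, -1, :] for layer in step_states])  # (layers, 1, dim)
        step_vectors.append(last_pos.mean(dim=0))  # average over layers
    return torch.cat(step_vectors).mean(dim=0)  # then average over generated steps
```

With vectors computed this way for a set of captions, alignment with a vision or audio encoder can be scored using the mutual k-NN metric sketched in the next section.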

2. Evaluation via Mutual k-NN Alignment

To measure "representational similarity":

  • For each caption, find its k nearest neighbors in the LLM's embedding space
  • Find the k nearest neighbors of the corresponding image/audio in the sensory encoder's space
  • Calculate overlap: What fraction of neighbors are shared?
  • Higher overlap = LLM's caption-caption relationships mirror the encoder's sensory-sensory relationships
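
A minimal sketch of this metric. It assumes row i of the caption embeddings and row i of the image/audio embeddings come from the same pair, uses cosine similarity for the neighbor search, and uses illustrative function names; the paper's implementation details may differ.

```python
# Mutual k-NN alignment: average, over items, of the fraction of k nearest
# neighbors shared between the LLM's space and the sensory encoder's space.
import numpy as np

def mutual_knn_alignment(text_embs: np.ndarray, sensory_embs: np.ndarray, k: int = 10) -> float:
    """text_embs: (N, d1) caption vectors; sensory_embs: (N, d2) paired image/audio vectors."""
    def knn_indices(embs: np.ndarray) -> np.ndarray:
        normed = embs / np.linalg.norm(embs, axis=1, keepdims=True)
        sims = normed @ normed.T          # cosine similarity matrix
        np.fill_diagonal(sims, -np.inf)   # exclude self-matches
        return np.argsort(-sims, axis=1)[:, :k]

    text_nn = knn_indices(text_embs)
    sensory_nn = knn_indices(sensory_embs)
    overlaps = [
        len(set(text_nn[i]) & set(sensory_nn[i])) / k
        for i in range(text_embs.shape[0])
    ]
    return float(np.mean(overlaps))

# Toy check: near-identical geometries should score close to 1.0.
rng = np.random.default_rng(0)
base = rng.normal(size=(500, 32))
print(mutual_knn_alignment(base + 0.05 * rng.normal(size=(500, 32)), base))
```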

3. Sensory Prompting Templates

  • SEE: "Imagine what it would look like to see {caption}"
  • HEAR: "Imagine what it would sound like to hear {caption}"
  • No cue baseline: "Imagine: {caption}"

4. Controllable Steering

The sensory cue acts as a prior that conditions what the model elaborates on:

  • SEE → increases vision alignment, decreases audio alignment
  • HEAR → increases audio alignment, decreases vision alignment
  • This creates a double dissociation, proving causal control

Critical Results

  1. Generation > Static Embeddings: Simply generating (even without sensory cues) already improves alignment over single-pass embeddings

  2. Length Matters: Alignment increases from 32→256 tokens as the model has more space to elaborate modality-specific details

  3. Scale Matters:

    • Larger models (32B) show stronger sensory alignment
    • Better separation between SEE/HEAR representations
    • Smaller models show default visual bias even for audio
  4. Content Matters:

    • Ablating sensory words (e.g., replacing "red" with "colored") decreases alignment
    • But adding random visual words also decreases it
    • Scene-appropriate sensory language is necessary
  5. Practical Impact: Visual prompting improves text-based VQA accuracy (67.14 vs 64.78 on MME benchmark)

Implications

Theoretical

  • Challenges symbol grounding: Shows LLMs can learn perceptually grounded meaning from text statistics alone, without direct sensory experience
  • Active control of alignment: Cross-modal convergence isn't just a passive training outcome—it can be steered at inference time
  • Prompts as distributional priors: Sensory cues condition the model on specific latent interpretations of what sensory evidence "caused" the text

Practical

  • Text-only models as multimodal embedders: Can extract vision/audio-aligned representations without multimodal training
  • Inference-time control: Adds to toolkit alongside chain-of-thought and retrieval
  • Cross-modal applications: Enables text-only models to act as image/audio encoders in text space

Why This Works (Proposed Mechanism)

The authors hypothesize that LLMs implicitly maintain uncertainty over latent causes (visual, auditory, etc.) that could have generated observed text. Sensory prompts:

  1. Condition on a specific sensory interpretation of the context
  2. Steer generation toward elaborating modality-appropriate details
  3. Accumulate representations through autoregression that reflect this framing

This shifts the geometry of the representation space—caption-caption similarity relationships become more like image-image (or audio-audio) relationships in specialist encoders, as measured by mutual k-NN overlap.

The key insight: What a model represents isn't fixed at training time—it can be elicited through the right prompts.


QUESTION 2:

Can you explain the value of this in basic terms? Like you're talking to a CEO. So what? What's the bottom line here?

ANSWER 2:

[THINKING]

The user wants me to explain the value of this research in simple business terms, as if explaining to a CEO who doesn't care about technical details but wants to know the practical impact and bottom line.

Let me think about what actually matters here:

  1. The "So What": You can make text-only AI models behave like they understand images and sounds, just by prompting them differently - no retraining needed

  2. Business Value:

    • Cost savings (don't need expensive multimodal training)
    • Flexibility (one model, multiple modalities via prompts)
    • Speed to market (change behavior instantly vs months of retraining)
  3. Concrete Applications:

    • Better search/retrieval
    • Quality control for AI outputs
    • Cheaper deployments
  4. Competitive Advantage:

    • Inference-time control is cheaper than training-time solutions
    • More adaptable systems
    • Lower infrastructure costs

Let me frame this in CEO language - ROI, competitive advantage, cost reduction, time to market, etc.

[ANSWER]

The Bottom Line (CEO Version)

What They Did

They discovered you can make a text-only AI "think visually" or "think in sound" just by changing a few words in your prompt, with no expensive retraining required.

Why This Matters (The So What)

1. Massive Cost Savings

  • Training multimodal AI is expensive: You need millions of image-text or audio-text pairs, specialized GPUs, weeks/months of training time
  • This approach costs nothing: You use the text-only model you already have and just prompt it differently
  • Example: Instead of training a $10M+ vision-language model, you add "imagine seeing..." to your prompts

2. Instant Flexibility

  • Traditional approach: If you want your AI to handle images, audio, and text, you need 3 different models or 1 massive multimodal model
  • This approach: One text model becomes a "Swiss Army knife"—switch modes instantly by changing the prompt
  • Time to market: Deploy in hours instead of months

3. Better Performance on Existing Tasks

  • They showed a roughly 3.6% relative accuracy improvement on text-based visual reasoning (67.14 vs. 64.78 on the MME benchmark) just by adding visual prompting
  • That's free performance gain on your existing infrastructure

Real-World Business Applications

Search & Retrieval

  • Your customer searches for "red sports car"
  • With visual prompting, your text-only search engine can rank results more like an image search would
  • Result: Better matching without storing/processing expensive image embeddings

Content Moderation

  • Feed the AI a text description of potentially problematic content
  • Visual prompting makes it reason more like it's actually "seeing" the scene
  • Result: Better judgment calls, fewer false positives/negatives

Cross-Modal Applications on a Budget

  • Generate image captions that actually match visual similarity
  • Create audio descriptions that capture acoustic properties
  • All from your existing text model

Quality Assurance for AI Outputs

  • When your image generator creates something from text, use visual prompting to check if the description actually matches
  • Works as a QA layer without needing vision models

The Competitive Advantage

For Product Teams

  • Faster iteration: Change behavior with prompt engineering vs waiting for model retraining
  • Lower technical debt: Fewer specialized models to maintain
  • Easier A/B testing: Try different sensory framings instantly

For Engineering Teams

  • Simpler infrastructure: Deploy one model instead of multiple specialized ones
  • Lower compute costs: Text-only models are cheaper to run than multimodal models
  • More control: Fine-tune behavior at inference time, not training time

For Business Strategy

  • Democratizes multimodal AI: Don't need Google/Meta-scale resources to get cross-modal capabilities
  • Platform flexibility: Adapt the same model to different sensory contexts based on use case
  • Future-proof: As text models get better, this technique gets better automatically

The "Magic" Explained Simply

Think of it like this:

Old way: Train an AI chef to cook Italian food. Want French food? Train a new chef. Want fusion? Train a third chef.

This way: Train one chef who learned about all cuisines from cookbooks. Want Italian? Say "imagine you're in Rome." Want French? Say "imagine you're in Paris." Same chef, different mindset, triggered by simple context.

The AI already learned about images and sounds indirectly through text (people describe what they see/hear). This research shows you can activate that latent knowledge just by asking the right way.

Investment Decision Framework

If you're deciding whether to care about this:

Invest attention if:

  • You're using text-only LLMs and need cross-modal capabilities
  • You're spending money on multimodal model training
  • You need flexible AI that adapts to different sensory contexts
  • You want better search/retrieval/matching without infrastructure overhaul

Lower priority if:

  • You already have well-performing multimodal models in production
  • Your use cases are purely text-based with no sensory component
  • You need absolute best-in-class vision/audio performance (specialized models still win)

The One-Sentence Takeaway

You can make your existing text AI understand images and sounds well enough for many applications just by prompting it differently—no retraining, no new models, available today.

That's money left on the table if you're not exploring this for your text-only deployments.


QUESTION 3:

Can you give me an analogy or metaphor that will help explain this to a broad audience?

ANSWER 3:

[THINKING]

The user wants an analogy or metaphor to explain this research to a broad, non-technical audience. Let me think about what makes this research special and find everyday experiences that capture the essence.

Key concepts to capture:

  1. Text-only training but latent multimodal knowledge
  2. Prompts activating different "modes" of thinking
  3. The same model behaving differently based on context
  4. Generative process revealing hidden structure
  5. Language encoding sensory experience indirectly

Let me brainstorm analogies:

Movie Director Analogy: A director who's never been to Paris but read hundreds of books about it can still direct a Paris scene authentically. When you say "we're shooting the Paris scene," they mentally shift into "Paris mode."

Musician Analogy: A jazz musician who learned by reading about music and listening to descriptions can still play - and when you say "play like you're in a smoky New Orleans club," they shift their whole performance style.

Dream Analogy: You've never been to space, but you can dream about it based on descriptions you've read. When someone says "imagine you're floating in space," your brain constructs that experience from indirect knowledge.

Method Actor Analogy: This might be the best one - method actors use sense memory and emotional preparation to inhabit characters. They've never been a detective, but when told "you're a hardboiled detective," they shift their entire bearing.

Radio DJ Analogy: A blind radio DJ who's never seen their listeners but has heard thousands of descriptions of what people look like, how rooms are arranged, how things move - they can still describe visual scenes surprisingly well when prompted to "imagine the visual."

I think the method actor or radio host analogies work best because:

  • Everyone understands actors preparing for roles
  • It captures the "mode switching" aspect
  • It shows how indirect knowledge can be activated
  • It explains why the "prompt" matters

Let me craft a compelling version.

[ANSWER]

Analogies to Explain This Research

The Method Actor Analogy (Best for Most Audiences)

Imagine a method actor who's never been a soldier, but has read hundreds of war memoirs, watched documentaries, and studied military history through text and interviews.

When the director says "Action!":

  • If they just walk onto set cold, they'll act okay
  • But if the director first says "Remember, you're in a foxhole, mud on your face, you can HEAR the artillery in the distance"—suddenly their whole performance shifts
  • Their body language changes, their eyes scan differently, their breathing pattern adapts
  • The sensory prompt activates all that indirect knowledge they absorbed from descriptions

That's what's happening here:

  • The AI learned about images and sounds indirectly through billions of text descriptions
  • Just like the actor never experienced war but absorbed it through stories
  • When you prompt with "imagine seeing..." or "imagine hearing...", you're like the director giving context
  • The AI shifts into "visual mode" or "audio mode" and performs accordingly

The key insight: The actor always had that knowledge—it was latent, learned from text. The director's prompt just activated it.


The Blind Radio Host Analogy (Great for Accessibility)

Imagine a radio host who was born blind but has been describing visual scenes for 20 years. They've never seen a sunset, but they've:

  • Read thousands of descriptions
  • Heard callers describe what they see
  • Learned which colors people associate with emotions
  • Memorized which objects typically appear together

What happens when someone asks them to describe a beach?

  • If you just say "describe this," they might give generic information
  • But if you say "describe what it LOOKS like", they shift into visual mode—they talk about golden sand, blue water, the way light sparkles on waves
  • If you instead say "describe what it SOUNDS like", they shift gears—now it's crashing waves, seagull cries, distant laughter

The surprise: Even though they've never seen anything, their visual descriptions match how sighted people organize visual scenes better than you'd expect—because language itself encodes those patterns.

That's exactly what the AI does: It's "blind" (never saw images), but learned visual patterns from how people write about what they see.


The Synesthesia Analogy (Good for Scientifically-Minded)

Some people have synesthesia—they "hear" colors or "see" music. Their brain creates cross-sensory experiences from single inputs.

These AI models are similar:

  • Feed them text about a "red sports car"
  • With the right prompt, they generate an internal representation that's surprisingly similar to what a vision AI creates when actually seeing a red sports car
  • It's like the text triggers a visual imagination, even though no image exists

The research discovery: You can control which sense gets activated, just by changing the prompt from "imagine seeing" to "imagine hearing."


The Translator Who Never Traveled Analogy (Excellent for International Audiences)

Imagine a translator who's never been to France but has translated 10,000 French novels, menus, travel guides, and poetry.

One day you give them a French description of a café:

  • If you ask "translate this," they give you accurate English words
  • But if you say "translate this as if you're SEEING the café", they suddenly include visual details—the zinc bar, the light through lace curtains, the worn marble tables
  • They've never seen a French café, but they absorbed the visual patterns of how French writers describe them

When you prompt the right way, latent knowledge surfaces.


The "Inner Voice" Analogy (Most Relatable)

You know how you have different "voices" in your head?

  • Your "critical voice" when reviewing your work
  • Your "creative voice" when brainstorming
  • Your "logical voice" when solving problems

Same information in your brain, different modes of processing it.

These AI models are similar:

  • They contain knowledge about images and sounds (learned from descriptions)
  • Normally, that knowledge is blended together
  • But when you prompt with "imagine SEEING...", you activate their "visual voice"
  • When you prompt with "imagine HEARING...", you activate their "audio voice"

The breakthrough: The researchers proved this actually works—the "visual voice" makes the AI organize information more like a vision system, measurably.


The Simple Version (Elevator Pitch)

"It's like discovering that someone who learned to paint from reading art books (never seeing actual paintings) can actually paint surprisingly well—but only if you remind them 'paint what you SEE' vs 'paint what you FEEL.' The prompting activates different knowledge they didn't know they had."


The "Hidden Talent" Analogy (Most Memorable)

You know that person who says "I'm not musical"—but then at karaoke after a few drinks and some encouragement, they suddenly belt out a perfect rendition of Bohemian Rhapsody?

The talent was always there. They just needed the right context to bring it out.

That's what sensory prompting does:

  • The AI "doesn't know" it can think visually (it was only trained on text)
  • But with the right prompt, that latent ability surfaces
  • It was there all along, encoded in the patterns of how people describe what they see

Which Analogy to Use When:

  • General audience: Method Actor or Hidden Talent
  • Business audience: Translator or Inner Voice
  • Technical but non-AI: Synesthesia or Radio Host
  • Need something memorable: "Paint from art books" or Hidden Talent
  • Need credibility: Radio Host (most concrete/verifiable)

The core message all convey: Knowledge learned indirectly can still be activated directly—you just need to know what to ask for.


QUESTION 4:

Can you provide a specific example that best illustrates the key points elucidated in this paper?

ANSWER 4:

[THINKING]

The user wants a specific, concrete example that illustrates the key findings of the paper. Let me look for the best example from the paper itself that demonstrates:

  1. How sensory prompting changes representations
  2. The mutual k-NN alignment concept
  3. How this produces different, modality-appropriate outputs
  4. The measurable difference in alignment

The paper has a great example in Figure 5 and Section B.1 about "Nasi goreng Pattaya" - this is probably the clearest illustration of the core mechanism.

Let me walk through this example step-by-step to show:

  • What the caption is
  • What the model generates under "no cue"
  • What the model generates under "SEE cue"
  • How the nearest neighbors change
  • Why this demonstrates alignment with vision encoders
  • The actual alignment scores

This example perfectly shows:

  1. The generative representation concept (generating text changes the representation)
  2. The sensory prompting effect (SEE cue shifts behavior)
  3. The mutual k-NN alignment (different neighbors = different geometry)
  4. Why this matters (visual descriptions → vision-like organization)

[ANSWER]

A Concrete Example: The "Nasi Goreng Pattaya" Case

Let me walk through the paper's best illustrative example that shows exactly how this works:

The Setup

Starting caption: "Nasi goreng Pattaya (Pattaya fried rice), a local delicacy from Pattaya, Thailand. It is fried rice served in a pouch of omelette, commonly found in Indonesia, Malaysia, Singapore and Thailand."

The test: Feed this caption to Qwen3-32B under two conditions:

  1. No cue: "Imagine: {caption}"
  2. SEE cue: "Imagine what it would look like to see {caption}"

Then measure: Do the generated representations align more with how a vision model (DINOv2) organizes actual images of food?


What the Model Generates

Under "No Cue" (Geographic/Cultural Mode)

The model generates:

"Nasi goreng Pattaya is a local delicacy from Pattaya, Thailand, but it's also popular in neighboring countries. It's a variation of fried rice that's often served with..."

Focus: Geographic origins, cultural context, regional variations

Under "SEE Cue" (Visual Mode)

The model generates:

"I need to visualize this dish. Let me break down the main components: fried rice, omelette, and the sauce. The fried rice would be brown from the soy sauce and cooking, maybe with small bits of vegetables mixed in. The omelette wraps around it like a pouch—bright yellow, possibly with some browning on the edges. There might be toppings like shrimp, chicken, or vegetables visible..."

Focus: Colors, shapes, textures, visual composition—what you'd actually SEE


The Critical Measurement: Who Are the "Neighbors"?

Here's where it gets interesting. They use mutual k-nearest neighbors to measure representational similarity.

No Cue Neighbors (Geographic clustering)

The 10 most similar captions in the representation space are:

  1. "Yam thale" (another Thai dish)
  2. "Lankascincus gansi" (a skink species from Sri Lanka)
  3. "Korean-Chinese Cultural Center in Incheon"
  4. Other geographic/cultural references

What happened: The model organized information by geographic/cultural similarity—Thailand, neighboring countries, Southeast Asian references.

SEE Cue Neighbors (Visual clustering)

The 10 most similar captions are now:

  1. "Blinchiki filled with cheese and topped with blackberries"
  2. "Spaghetti topped with pulled pork in a marinara sauce"
  3. "Fried rice dish with vegetables"
  4. Other food dishes with visible components

What happened: The model reorganized to cluster by visual similarity—dishes with similar colors, compositions, wrapped/topped presentations.


Why This Matters: Alignment with Vision Models

Now they check: How does a real vision model (DINOv2) organize actual images of these foods?

Result: The vision model ALSO groups foods by visual appearance (colors, plating, textures), not by geographic origin.

Alignment Scores

  • No Cue → Vision Model overlap: 3 out of 10 neighbors match
  • SEE Cue → Vision Model overlap: 7 out of 10 neighbors match

The SEE cue increased alignment by >130% just by changing the prompt.


What This Demonstrates

1. Different Prompts = Different Internal Representations

The same caption produces fundamentally different "mental models":

  • No cue: Encyclopedic knowledge graph (countries, cultures, cuisines)
  • SEE cue: Perceptual feature space (colors, shapes, compositions)

2. Generative Process Matters

The alignment happens during generation:

  • As the model generates "brown from soy sauce," "bright yellow omelette," "bits of vegetables"
  • Each token's hidden state accumulates visual feature information
  • The averaged representation becomes more vision-like

3. Text-Only Model Behaves Like Vision Model

Without ever seeing an image:

  • The LLM learns that "wrapped in omelette" is visually similar to "topped with sauce"
  • It clusters foods by appearance, just like DINOv2 does
  • This similarity structure was latent in the text training, activated by the prompt

4. Controllable and Causal

Change the prompt → change the representation geometry → change which neighbors match

This isn't correlation—it's a causal intervention.


The Contrasting Example: When It Backfires

The paper also shows when this DOESN'T help:

Caption: "MOLA map of Suess" (a Mars topographic map)

No Cue (Stays Technical)

"MOLA stands for Mars Orbiter Laser Altimeter... it created topographic maps of Mars by measuring laser pulse reflections..."

Neighbors: Other Mars maps, crater images, planetary topography

SEE Cue (Goes Off-Track)

"Suess... is that referring to Narnia? Or maybe The Hitchhiker's Guide to the Galaxy? Or Dr. Seuss the author? Let me think about what those would look like..."

Neighbors: Random images of authors, fictional landscapes, completely unrelated scenes

Alignment drops by 40%—the visual prompt introduced ambiguity that pulled the representation away from the actual technical/spatial content.


The Practical Takeaway

Imagine you're building a food search engine:

Traditional Approach

  • Train a vision model on millions of food images
  • Encode all your food photos
  • Match queries to images
  • Cost: Millions in training, expensive inference

This Approach

  • Use text-only LLM with food descriptions
  • Add "imagine seeing..." to search queries
  • Get vision-aligned representations from text alone
  • Cost: Just prompt engineering

The food search will cluster dishes by visual similarity (what matters for search) rather than by cuisine type (what the text emphasizes).


The "Aha!" Moment

Here's the key insight this example reveals:

The model already "knew" that:

  • Wrapped foods look similar to other wrapped foods
  • Yellow and brown are the colors of fried rice with egg
  • Texture and composition matter for visual similarity

It learned this from text alone—because people describe what they see in consistent ways. The prompt just told the model: "Hey, organize your knowledge according to VISUAL similarity right now, not geographic similarity."

And it worked.


In One Sentence

When you tell a text-only AI to "imagine seeing" fried rice, it shifts from grouping foods by country-of-origin to grouping them by visual appearance—matching how a real vision AI organizes actual photos—and you can measure this shift by checking which other foods become "neighbors" in the representation space.
