seed
[
{
"category": "Physical and Spatial Reasoning",
"overview": "Large language models (LLMs), especially transformer-based models, typically struggle with physical and spatial reasoning due to their associative rather than causal or simulation-based internal representations. They lack grounded understanding or internal simulations of real-world physics, instead relying solely on statistical associations learned from textual data. Without explicit mental models or sensory experiences of spatial relations, gravity, friction, containment, and object permanence, LLMs default to pattern-based associations and linguistic heuristics rather than accurate physical logic. Thus, when confronted with scenarios that require concrete reasoning about physical interactions, spatial positioning, or hidden-object inference, LLMs often provide incorrect or illogical responses.\n\nThis limitation arises fundamentally because LLMs do not possess innate spatial or physical intuitions, nor do they internally simu