I experimented with an LLM to help me plan a fictitious work schedule. The goal of this experiment was to learn how much the LLM could correctly infer about unstated details, and how well it would do as I supplied an increasing number of details explicitly.
What I found is that the LLM seemed built with a stronger leaning toward pleasing me than toward being accurate. Still, this impression may simply be my personal take rather than fact. It could be that the LLM is not very "intelligent" at all, and only follows the lead of each prompt to take another stab at correctness, without having much of a grasp of what correctness would mean.