While these terms are often used interchangeably in prompt engineering, they trigger fundamentally different pathways in a Transformer’s latent space.
- Instruction Extraction (Syntactic): The model operates as a filter, identifying imperative verbs and procedural markers. Its attention stays "close" to the surface of the text.
- Intent Synthesis (Teleological): The model operates as a reasoner. It must compress the entire context to recover a "hidden" state or goal, which demands broader, more global attention. The sketch after this list contrasts the two framings on the same input.
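A minimal sketch of the contrast in plain Python. The prompts and the sample text are illustrative, not tested templates, and the attention behavior described in the comments is the hypothesis above, not a measurement:

```python
source_text = "Ship the hotfix tonight; the demo is tomorrow at 9am."

# Instruction extraction: a surface-level filter. The model only has to
# locate imperative verbs and procedural markers ("Ship ... tonight").
extraction_prompt = (
    "List every explicit instruction in the following text, verbatim:\n"
    + source_text
)

# Intent synthesis: a teleological read. The model must compress the whole
# context to infer the unstated goal (e.g., "be demo-ready by 9am").
synthesis_prompt = (
    "What is the author ultimately trying to achieve in this text? "
    "State the underlying goal in one sentence:\n" + source_text
)
```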
In your scenario of parsing text into variables, the choice of output format (strict JSON vs. loose pseudo-code) acts as a control valve between these two behaviors.
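A sketch of how that valve might be set, assuming (plausibly, but not guaranteed) that a rigid schema keeps the model in extraction mode while loose pseudo-code invites synthesis. The field names, prompt wording, and sample reply are all hypothetical:

```python
import json

text = "Ship the hotfix tonight; the demo is tomorrow at 9am."

# Strict schema: the model fills fixed slots from surface tokens,
# which (by assumption) keeps it in extraction mode.
json_prompt = (
    "Extract variables from the text. Respond with only a JSON object "
    'shaped like {"task": "...", "deadline": "..."}.\n' + text
)

# Loose pseudo-code: the model must name the variables itself and may
# infer implied values, which (by assumption) nudges it toward synthesis.
pseudocode_prompt = (
    "Read the text and assign what the author really needs to variables "
    "in pseudo-code, e.g. goal = ...; constraint = ...\n" + text
)

def parse_json_reply(reply: str) -> dict:
    """Parse a strict-JSON reply; raises ValueError on malformed output."""
    return json.loads(reply)

# No model is wired up here, so a hand-written reply stands in:
print(parse_json_reply('{"task": "ship the hotfix", "deadline": "tonight"}'))
```

The design point is that the JSON path is machine-checkable (a malformed reply fails loudly in `parse_json_reply`), while the pseudo-code path trades that safety for room to infer.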