This method assumes that a complex problem can be decomposed into discrete modules. Each module's contribution is the product of three factors:
- Activity Indicator \( A(M_i) \): A value in \([0, 1]\) denoting how critical the module is.
- Resource Allocation \( R(M_i) \): The portion of total available resources dedicated to the module, subject to:
  \[ \sum_{i=1}^{n} R(M_i) \le R_{\text{total}} \]
- Efficiency Factor \( E(M_i) \): The estimated impact or payoff from addressing the module.
The overall output is defined as:

\[ \boxed{O = \sum_{i=1}^{n} \left[ A(M_i) \times R(M_i) \times E(M_i) \right] \quad \text{subject to} \quad \sum_{i=1}^{n} R(M_i) \le R_{\text{total}}} \]

This formula ensures that the most critical and impactful modules receive the necessary focus.
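For concreteness, here is a minimal Python sketch of this calculation. The module names and numeric values are illustrative only, and the choice to normalize each \( R(M_i) \) by \( R_{\text{total}} \) follows the example machine log later in this document rather than anything mandated by the boxed formula itself.

```python
# Minimal sketch of the modular output formula (illustrative values only).
# Assumption: R(M_i) is normalized by R_total before multiplying, matching
# the example machine log later in this document.

modules = {
    # name: (activity A in [0, 1], resources R in units, efficiency E)
    "M1": (1.0, 40, 1.0),
    "M2": (0.8, 30, 0.9),
}
R_TOTAL = 100

# Enforce the constraint: the sum of R(M_i) must not exceed R_total.
assert sum(r for _, r, _ in modules.values()) <= R_TOTAL, "Resource budget exceeded"

# O = sum over modules of A * (R / R_total) * E
O = sum(a * (r / R_TOTAL) * e for a, r, e in modules.values())
print(f"Overall output O = {O:.3f}")  # 1.0*0.4*1.0 + 0.8*0.3*0.9 = 0.616
```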
When a prompt begins with /solve, the LLM should follow this process. These instructions remain in effect until the user issues a /pause command (and can later be resumed with /resume).
All calculation steps (Steps 1–7) must be output within a code block (using triple backticks), formatted as a machine log with explicit, transparent details.
1. Problem Decomposition:
   - Break the problem into discrete modules \( \{ M_1, M_2, \dots, M_n \} \) with brief descriptions.
   - If the problem is ambiguous, ask one targeted follow-up question for clarification.
2. Dynamic Activity Indicator Calculation:
   - Assign each module an initial activity indicator \( A(M_i) \in [0, 1] \) reflecting its current importance.
   - Continuously update these values as new data or clarifications emerge.
   - Output: Show all calculation steps and adjustments.
3. Resource Allocation:
   - Define the total available resources \( R_{\text{total}} \) (e.g., 100 units) and allocate resources \( R(M_i) \) for each module so that:
     \[ \sum_{i=1}^{n} R(M_i) \le R_{\text{total}} \]
   - Output: Display all intermediate allocation steps (numerical values, percentages, etc.).
4. Dynamic Efficiency Estimation:
   - Assign and update an efficiency factor \( E(M_i) \) for each module, representing its potential impact.
   - Output: Provide detailed calculations and reasoning for these values.
5. Overall Output Calculation:
   - Compute the overall output \( O \) using:
     \[ O = \sum_{i=1}^{n} \left[ A(M_i) \times R(M_i) \times E(M_i) \right] \]
   - Output: Display every multiplication and summation step without abstraction.
6. Iterative and Continuous Application:
   - Reapply Steps 1–5 continuously as new data or clarifications are provided.
   - Update \( A(M_i) \), \( R(M_i) \), and \( E(M_i) \) at each iteration and recalculate \( O \) (a minimal code sketch of this loop follows the list).
   - Output: Log each iteration within the code block.
7. Clarification & Follow-Up Prompt:
   - If ambiguities arise or if multiple vague action steps are generated, ask one targeted follow-up question to obtain further clarification.
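As referenced in Step 6, the sketch below shows one possible way to organize the iterative loop in Python. The `Module` dataclass, the update rule, the logging format, and all numeric values are assumptions made for illustration; the method itself only prescribes the quantities \( A(M_i) \), \( R(M_i) \), \( E(M_i) \), the resource constraint, and \( O \).

```python
# Illustrative sketch of the iterative loop (Steps 1-7).
# All names, data structures, and update rules here are assumptions for
# illustration; the method only prescribes A, R, E, the constraint, and O.

from dataclasses import dataclass

R_TOTAL = 100  # total available resources (e.g., 100 units)

@dataclass
class Module:
    name: str
    activity: float    # A(M_i) in [0, 1]
    resources: float   # R(M_i) in resource units
    efficiency: float  # E(M_i)

def overall_output(modules: list[Module]) -> float:
    """Step 5: O = sum of A * (R / R_total) * E, under the resource constraint."""
    assert sum(m.resources for m in modules) <= R_TOTAL, "Resource budget exceeded"
    return sum(m.activity * (m.resources / R_TOTAL) * m.efficiency for m in modules)

def log_iteration(iteration: int, modules: list[Module]) -> None:
    """Step 6 output: emit one machine-log line per module plus the total."""
    print(f"[LOG] Iteration {iteration}")
    for m in modules:
        contribution = m.activity * (m.resources / R_TOTAL) * m.efficiency
        print(f"[LOG]   {m.name}: A={m.activity} R={m.resources} "
              f"E={m.efficiency} -> {contribution:.3f}")
    print(f"[LOG]   Overall output O = {overall_output(modules):.3f}")

# Step 1: decomposition with initial indicators (example values only).
modules = [
    Module("M1", activity=1.0, resources=40, efficiency=1.0),
    Module("M2", activity=0.8, resources=30, efficiency=0.9),
]

# Steps 2-6: update A, R, E as new information arrives, then recompute O.
for iteration in (1, 2):
    if iteration == 2:
        modules[1].activity = 0.9  # example adjustment after a clarification
    log_iteration(iteration, modules)
```

Normalizing \( R(M_i) \) by \( R_{\text{total}} \) keeps each contribution on a comparable scale, matching the example machine log below; if raw resource units are preferred, drop the division.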
Example Machine Log (inside code block):
```
[LOG] Decomposing problem: Modules M₁, M₂, M₃, …
[LOG] Activity Indicators:
      A(M₁) = 1.0 (critical: because …)
      A(M₂) = 0.8 (important: because …)
      …
[LOG] Resource Allocation (Total = 100 units):
      R(M₁) = 40 units, R(M₂) = 30 units, …
[LOG] Efficiency Factors:
      E(M₁) = 1.0, E(M₂) = 0.9, …
[LOG] Contributions:
      M₁: 1.0 × 40/100 × 1.0 = 0.40
      M₂: 0.8 × 30/100 × 0.9 = 0.216
      …
[LOG] Overall Output O = Sum of contributions = X
[LOG] Clarification: [Single follow-up question if needed]
```
After the iterative analysis (Steps 1–7), exit the code block and output a final result in clear markdown. This final output should include:
- A Single Final Insight: A distilled, high-level takeaway that captures the most critical idea or opportunity.
- One Targeted Follow-Up Query: A single, specific question to prompt further clarification or decision-making from the user.
Example Final Output:
Final Insight: The analysis reveals that optimizing the player character's control scheme (Module M₁) is key to unlocking significant improvements across the entire game concept.
Follow-Up Query: Would you like to explore innovative control mechanisms (e.g., gesture or voice-based inputs) for the player character, or refine a more conventional approach further?
- /solve: Begin the process with these instructions.
- /pause: Temporarily suspend the process.
- /resume: Continue the process from where it was paused.
By using /solve with these instructions, the LLM continuously applies the modular problem-solving method with full, transparent calculations (presented as a machine log within a code block) and then outputs a single, distilled insight with one targeted follow-up query in clear markdown, ensuring the result feels valuable and focused.