Prompting the Reasoning models

  • keep prompts simple & direct; elaborate prompt engineering matters much less for reasoning models, and overly complex prompts can be detrimental
  • use 1-2-shot prompting: instead of excessive explanation, give fewer than 3 examples
  • prompt for extended reasoning to get more reasoning tokens (a minimal API-call sketch appears after this list)
    • Take your time and think as carefully and methodically about the problem as you need to. I am not in a rush for the best answer; I would like you to spend as much time as you need studying and exploring the problem. When you're done, return only the answer.
  • decompose difficult tasks into small steps
    • Agent planning/reasoning (5+ steps): plan generation (see the two-stage pipeline sketch after this list)
      • You are a software architect assistant. The first input you will receive will be a complex task that needs to be carefully reasoned through to solve.
      • Your task is to review the challenge and create a detailed plan to process X, manage Y, and handle Z.
      • You will have access to an LLM agent that is responsible for executing the plan that you create and will return results.
      • The LLM agent has access to the following functions:
        • get_inventory_status(product_id): This function gets the inventory we currently have available for the product
        • get_product_details(product_id): This function gets the necessary components we need to manufacture additional product
      • When creating a plan for the LLM to execute, break your instructions into a logical, step-by-step order, using the specified format:
        • Main actions are numbered: e.g., 1, 2, 3
        • Sub-actions are lettered under their relevant main actions: e.g., 1a, 1b
          • Sub-actions should start on new lines
        • Specify conditions using clear 'if...then...else' statements
        • For actions that require using one of the above defined functions, write a step to call the function, using backticks for the function name
          • Ensure that the proper input arguments are given to the model for instruction. There should not be any ambiguity in the inputs.
        • The last step in the instructions should always be calling the instructions_complete function. This is necessary so we know the LLM has completed all of the instructions you have given it.
        • Detailed steps: The plan generated must be extremely detailed and thorough with explanations at every step.
      • Use markdown format when generating the plan with each step and sub-step.
      • Please find the scenario below: {scenario}
    • Then pass the generated plan to a non-reasoning model to execute
      • You are a helpful assistant responsible for executing the policy on handling X. Your task is to follow the policy exactly as it is written and perform the necessary actions.
      • You must explain your decision-making process across various steps.
      • Steps:
        1. Read and Understand Policy: Carefully read and fully understand the given policy on handling X.
        2. Identify the exact step in the policy: Determine which step in the policy you are at, and execute the instructions according to the policy.
        3. Decision Making: Briefly explain your actions and why you are performing them.
        4. Action Execution: Perform the actions required by calling any relevant functions and input parameters.
      • Policy: {policy}
    • image reasoning
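
As a first illustration, here is a minimal sketch of the "simple & direct" style combined with the extended-reasoning nudge. The OpenAI Python SDK, the "o1" model name, and the example task are assumptions for illustration, not part of the gist.

```python
# A minimal sketch of the "simple & direct" style plus the extended-reasoning
# nudge. Assumptions (not from the gist): the OpenAI Python SDK and a reasoning
# model exposed under the name "o1".
from openai import OpenAI

client = OpenAI()

EXTENDED_REASONING = (
    "Take your time and think as carefully and methodically about the problem "
    "as you need to. I am not in a rush for the best answer; I would like you "
    "to spend as much time as you need studying and exploring the problem. "
    "When you're done, return only the answer."
)

# One short, direct task; no elaborate system prompt, at most 1-2 examples.
task = "Find the bug in this function:\n\ndef mean(xs):\n    return sum(xs) / len(xs) - 1"

response = client.chat.completions.create(
    model="o1",  # placeholder reasoning-model name
    messages=[{"role": "user", "content": f"{task}\n\n{EXTENDED_REASONING}"}],
)
print(response.choices[0].message.content)
```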
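
The plan-then-execute split above can be wired up roughly as below. Everything concrete here is an assumption for illustration: the model names ("o1" for the planner, "gpt-4o" for the executor), the stub tool implementations, and the example scenario. Only the overall shape follows the gist: the reasoning model writes the plan, and a non-reasoning model executes it via function calls until it calls instructions_complete.

```python
# Rough sketch of the two-stage pipeline: a reasoning model generates the plan,
# a non-reasoning model executes it via function calling. Model names, stub
# tool implementations, and the scenario are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()

PLANNER_PROMPT = "You are a software architect assistant. ..."   # full planner prompt from above
EXECUTOR_PROMPT = "You are a helpful assistant responsible for executing the policy ..."  # executor prompt from above


def get_inventory_status(product_id):
    return {"product_id": product_id, "on_hand": 12}  # stub for illustration


def get_product_details(product_id):
    return {"product_id": product_id, "components": ["frame", "motor"]}  # stub for illustration


def tool_spec(name, description, props=None):
    # Helper to build an OpenAI function-calling tool definition.
    return {"type": "function", "function": {
        "name": name, "description": description,
        "parameters": {"type": "object", "properties": props or {},
                       "required": list(props or {})},
    }}


TOOLS = [
    tool_spec("get_inventory_status", "Get currently available stock for a product.",
              {"product_id": {"type": "string"}}),
    tool_spec("get_product_details", "Get the components needed to manufacture a product.",
              {"product_id": {"type": "string"}}),
    tool_spec("instructions_complete", "Signal that every step of the plan has been executed."),
]

# Stage 1: the reasoning model turns the scenario into a detailed step-by-step plan.
scenario = "We received an order for 50 units of product X-100."
plan = client.chat.completions.create(
    model="o1",  # placeholder reasoning-model name
    messages=[{"role": "user", "content": f"{PLANNER_PROMPT}\n\nScenario: {scenario}"}],
).choices[0].message.content

# Stage 2: a non-reasoning model walks the plan, calling tools step by step
# until it calls instructions_complete.
messages = [
    {"role": "system", "content": f"{EXECUTOR_PROMPT}\n\nPolicy:\n{plan}"},
    {"role": "user", "content": "Execute the policy."},
]
done = False
while not done:
    msg = client.chat.completions.create(
        model="gpt-4o",  # placeholder non-reasoning-model name
        messages=messages,
        tools=TOOLS,
    ).choices[0].message
    messages.append(msg)
    if not msg.tool_calls:
        break  # the model answered in plain text instead of calling a tool
    for call in msg.tool_calls:
        name = call.function.name
        args = json.loads(call.function.arguments or "{}")
        if name == "instructions_complete":
            result, done = "done", True
        else:
            result = json.dumps({"get_inventory_status": get_inventory_status,
                                 "get_product_details": get_product_details}[name](**args))
        messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
```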

(credit AI Jason)
