iTerm2 Ollama Integration

@soham2008xyz · Last active November 10, 2024

A guide on using Ollama as the OpenAI API provider for inline completions in iTerm2.

Configuration

  • API URL: http://127.0.0.1:11434/v1/completions
  • Model: mistral
  • Tokens: 4000
  • Use legacy completions API: true
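
To verify the endpoint before wiring it into iTerm2, you can query it directly. Below is a minimal sanity check, assuming Ollama is running locally on its default port and exposing its OpenAI-compatible API; the prompt string is just an illustrative placeholder:

```sh
# Pull the model first if it is not already available locally.
ollama pull mistral

# Query the legacy completions endpoint that iTerm2 will call.
curl http://127.0.0.1:11434/v1/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mistral",
    "prompt": "Return a shell command that lists files by size.",
    "max_tokens": 100
  }'
```

A JSON response containing a `choices` array confirms the endpoint is reachable and the model loads.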

Prompt

Return commands suitable for copy/pasting into \(shell) on \(uname).

The script should do this: \(ai.prompt)

IMPORTANT: Do NOT include any instructions, bullet points, commentary, or Markdown triple-backtick code blocks, as your whole response will be copied into my terminal automatically. ONLY return the command, without enclosing it in a Markdown code block.
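
For concreteness, here is roughly what iTerm2 sends after interpolating the variables, assuming bash on macOS (where `uname` reports Darwin) and the request "list files by size"; the values are illustrative:

```
Return commands suitable for copy/pasting into bash on Darwin.

The script should do this: list files by size

IMPORTANT: Do NOT include any instructions, bullet points, commentary, or Markdown triple-backtick code blocks, as your whole response will be copied into my terminal automatically. ONLY return the command, without enclosing it in a Markdown code block.
```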
@bsahane commented Nov 6, 2024

I am using the settings below:

  • Enable Generative AI Features: True
  • OpenAI API Key: ollama
  • API URL: http://localhost:11434/v1/chat/completions
  • Model: llama3.2:latest
  • Tokens: 99,999
  • Use legacy completions API: False
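
A quick way to sanity-check this configuration from the shell, assuming llama3.2:latest has already been pulled, is a direct request to the chat completions endpoint (Ollama ignores the API key, so any placeholder value works; the message content below is just an example):

```sh
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ollama" \
  -d '{
    "model": "llama3.2:latest",
    "messages": [
      {"role": "user", "content": "Return a shell command that prints disk usage."}
    ]
  }'
```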

@FreeBLD commented Nov 10, 2024

@bsahane tried your config but used mistral:latest as the model. Works like a charm!
