@tgaff
Last active March 6, 2024 07:39
Using a remote system to serve AI Code helper with LM Studio & VS Code Continue extension (LLM)

I'm using a Windows machine with an RTX 3070 for this.

On the Windows side

  1. install LM Studio
  2. install and start the OpenSSH server
  3. download a model, ideally a code-related one
  4. from the Server tab: a) select the model at the top, b) accept the updated prompt if needed, c) click Start Server
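Before moving to the development machine, you can sanity-check the server locally. A minimal sketch, assuming LM Studio's default port (1234) and its OpenAI-compatible `/v1/models` endpoint; adjust the base URL if you changed the port:

```python
# Quick check that the LM Studio server is up (run on the Windows machine).
import json
import urllib.error
import urllib.request

def models_url(base="http://localhost:1234/v1"):
    """Build the URL for the model-listing endpoint."""
    return base.rstrip("/") + "/models"

try:
    # Lists the model(s) currently loaded by the server
    with urllib.request.urlopen(models_url(), timeout=5) as resp:
        print(json.dumps(json.load(resp), indent=2))
except (urllib.error.URLError, OSError) as exc:
    print(f"Server not reachable: {exc}")
```

If this prints a model list, the server side is ready; if it reports "not reachable", check that Start Server was clicked and the port matches.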

On the development machine

First, from your development machine, SSH into the other machine and port-forward the LM Studio port:

ssh -L 1234:127.0.0.1:1234 desktop-e3rmgvg\\[email protected]

Be sure to adjust the machine name, user name, and IP. Get the machine name and user name from a non-admin PowerShell prompt with `whoami`; get the IP from `ipconfig`.

Add something like the following to the Continue config:

    {
      "title": "LM STUDIO3",
      "provider": "openai",
      "model": "deepseek-ai_deepseek-coder-6.7b-base",
      "apiBase": "http://localhost:1234/v1/"
    },
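For context, that entry sits inside the `models` array of Continue's `config.json` (typically found under `~/.continue/`); the title is arbitrary, and the model name should match what LM Studio reports:

```json
{
  "models": [
    {
      "title": "LM STUDIO3",
      "provider": "openai",
      "model": "deepseek-ai_deepseek-coder-6.7b-base",
      "apiBase": "http://localhost:1234/v1/"
    }
  ]
}
```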

Install the Continue extension in VS Code.

Read the docs for usage (https://continue.dev/docs/how-to-use-continue#introduction); in short, though:

  1. highlight code
  2. cmd+shift+m
  3. ask a question
  4. to have it edit code, prepend your question with `/edit`
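Under the hood, Continue sends OpenAI-style chat-completion requests through the forwarded port. A sketch of an equivalent manual request, useful for debugging the tunnel; the model name and question are placeholders:

```python
# POST an OpenAI-style chat completion to the forwarded LM Studio port.
import json
import urllib.error
import urllib.request

def build_request(question, base="http://localhost:1234/v1",
                  model="deepseek-ai_deepseek-coder-6.7b-base"):
    """Build a chat-completion request like the one Continue sends."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": question}],
    }
    return urllib.request.Request(
        base.rstrip("/") + "/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_request("Explain this regex: ^\\d{4}-\\d{2}$")
try:
    with urllib.request.urlopen(req, timeout=30) as resp:
        body = json.load(resp)
        print(body["choices"][0]["message"]["content"])
except (urllib.error.URLError, OSError) as exc:
    print(f"Tunnel or server not up: {exc}")
```

If this fails while the server check on the Windows side succeeds, the problem is the SSH port forward rather than LM Studio.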