This document outlines the necessary changes to modify Loom's OpenAI API implementation to support local Language Models (LLMs) that use the OpenAI API specification. This modification will allow users to interact with local LLMs using the same interface as the official OpenAI API.
- Allow users to specify a custom base URL for OpenAI API calls
- Maintain compatibility with the existing OpenAI implementation
- Provide a seamless way to switch between official OpenAI models and local LLMs
- Minimize changes to the existing codebase
File: common.ts
Add a new `baseUrl` field to the `ModelPreset` interface:
export interface ModelPreset<P extends Provider> {
  name: string;
  provider: P;
  model: string;
  contextLength: number;
  apiKey: string;
  baseUrl?: string; // New field for custom base URL
  // ... other existing fields
}
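For illustration, a preset that targets a local server simply fills in the new field. The values below are placeholders, and any other required fields from the real interface are omitted:

```typescript
// Illustrative only: model name, context length, and URL depend on your local server.
const localPreset: ModelPreset<"openai"> = {
  name: "Local LLM",
  provider: "openai",
  model: "my-local-model",       // the name your local server registers its model under
  contextLength: 4096,
  apiKey: "unused",              // many local servers accept any key
  baseUrl: "http://localhost:8000/v1",
};
```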
File: main.ts
Update the `completeOpenAI` method to use the custom base URL from the active preset when one is set:
import { OpenAIApi, Configuration } from 'openai';
// ... existing imports and code

export default class LoomPlugin extends Plugin {
  // ... existing code

  async completeOpenAI(prompt: string): Promise<CompletionResult> {
    const preset = getPreset(this.settings) as ModelPreset<"openai">;
    const openai = new OpenAIApi(new Configuration({
      apiKey: preset.apiKey,
      basePath: preset.baseUrl || undefined, // Use custom base URL if provided
    }));
    try {
      const response = await openai.createCompletion({
        model: preset.model,
        prompt: prompt,
        max_tokens: this.settings.maxTokens,
        temperature: this.settings.temperature,
        top_p: this.settings.topP,
        frequency_penalty: this.settings.frequencyPenalty,
        presence_penalty: this.settings.presencePenalty,
      });
      return {
        ok: true,
        completions: response.data.choices.map(choice => choice.text || ""),
      };
    } catch (error: any) {
      console.error("OpenAI API error:", error);
      return {
        ok: false,
        status: error.response?.status || 500,
        message: error.message,
      };
    }
  }

  // ... rest of the class
}
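The snippet above targets the v3 `openai` npm package, where the base URL is supplied via `Configuration.basePath`. If the plugin is ever migrated to the v4 SDK, the same idea maps onto the client's `baseURL` constructor option; a minimal sketch under that assumption, not Loom's current code:

```typescript
import OpenAI from "openai";

// v4 SDK sketch: the base URL goes directly on the client instead of a Configuration object.
async function completeWithCustomBase(preset: ModelPreset<"openai">, prompt: string): Promise<string[]> {
  const client = new OpenAI({
    apiKey: preset.apiKey,
    baseURL: preset.baseUrl || undefined, // undefined falls back to https://api.openai.com/v1
    dangerouslyAllowBrowser: true,        // likely needed in Obsidian's renderer process
  });
  const response = await client.completions.create({
    model: preset.model,
    prompt,
    max_tokens: 256, // wire this to this.settings.maxTokens in the real plugin
  });
  return response.choices.map((choice) => choice.text ?? "");
}
```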
File: settings.ts
Add a new setting for the custom base URL in the OpenAI preset section:
import { App, PluginSettingTab, Setting } from 'obsidian';
import LoomPlugin from './main';

export class LoomSettingTab extends PluginSettingTab {
  plugin: LoomPlugin;

  constructor(app: App, plugin: LoomPlugin) {
    super(app, plugin);
    this.plugin = plugin;
  }

  display(): void {
    const { containerEl } = this;
    containerEl.empty();
    // ... existing settings code
    const currentPreset = this.plugin.settings.modelPresets[this.plugin.settings.modelPreset];
    if (currentPreset.provider === "openai") {
      new Setting(containerEl)
        .setName("OpenAI Base URL")
        .setDesc("Enter a custom base URL for local LLMs (optional)")
        .addText(text => text
          .setPlaceholder("https://api.openai.com/v1")
          .setValue(currentPreset.baseUrl || "")
          .onChange(async (value) => {
            currentPreset.baseUrl = value || undefined;
            await this.plugin.saveSettings();
          }));
    }
    // ... rest of the settings
  }
}
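Hand-typed base URLs often pick up trailing slashes or stray whitespace. A small normalization step in the `onChange` handler keeps the stored value clean; `normalizeBaseUrl` below is a hypothetical helper, not an existing Loom function:

```typescript
// Hypothetical helper: trim whitespace and trailing slashes, and reject obviously invalid URLs.
function normalizeBaseUrl(input: string): string | undefined {
  const trimmed = input.trim().replace(/\/+$/, "");
  if (trimmed === "") return undefined; // an empty field means "use the official API"
  try {
    new URL(trimmed); // throws if the string is not a valid URL
    return trimmed;
  } catch {
    return undefined;
  }
}
```

Silently dropping an invalid URL may be too quiet in practice; showing an Obsidian `Notice` from the settings tab would make the failure visible.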
File: main.ts
In the `onload` method of the `LoomPlugin` class, add a new preset for local LLMs:
async onload() {
  // ... existing onload code
  const localLLMPreset: ModelPreset<"openai"> = {
    name: "Local LLM (OpenAI-compatible)",
    provider: "openai",
    model: "local-model", // This should match the model name expected by your local LLM
    contextLength: 4096, // Adjust as needed
    apiKey: "dummy-key", // Your local LLM might not need a real API key
    baseUrl: "http://localhost:8000/v1", // Default local URL, can be changed in settings
  };
  // Only add the preset once; otherwise it would be duplicated on every plugin load
  if (!this.settings.modelPresets.some((p) => p.name === localLLMPreset.name)) {
    this.settings.modelPresets.push(localLLMPreset);
    await this.saveSettings();
  }
  // ... rest of onload
}
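Most OpenAI-compatible servers also expose `GET /models`, which makes a cheap reachability check possible. The sketch below assumes that convention and uses a plain `fetch`; it is optional and should not block `onload`:

```typescript
// Optional reachability check against an OpenAI-compatible server's /models endpoint.
async function checkLocalServer(baseUrl: string, apiKey: string): Promise<boolean> {
  try {
    const res = await fetch(`${baseUrl.replace(/\/+$/, "")}/models`, {
      headers: { Authorization: `Bearer ${apiKey}` },
    });
    return res.ok;
  } catch {
    return false; // server not running, wrong port, or unreachable URL
  }
}
```

Calling this after the preset is registered and showing a `Notice` on failure surfaces misconfiguration before the first completion attempt.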
File: main.ts
Confirm that the `generate` method dispatches to the provider-specific completion method; it does not need to pass `baseUrl` explicitly, since `completeOpenAI` reads it from the preset:
async generate(file: TFile, rootNode: string | null) {
  // ... existing preparation logic
  const completionMethods: Record<Provider, (prompt: string) => Promise<CompletionResult>> = {
    openai: (prompt) => this.completeOpenAI(prompt),
    // ... other providers
  };
  const preset = getPreset(this.settings);
  const result = await completionMethods[preset.provider](prompt);
  // ... handle result and create nodes
}
Note that we don't need to modify the `generate` method signature or the `completionMethods` object, as the `baseUrl` is already included in the preset and will be used within the `completeOpenAI` method.
To ensure the modification works correctly, perform the following tests:
- Unit Tests:
  - Create unit tests for the `completeOpenAI` method, mocking the OpenAI API responses (see the sketch after this list).
  - Test with both default and custom base URLs.
- Integration Tests:
  - Set up a local LLM server that implements the OpenAI API spec (e.g., using LocalAI).
  - Create an integration test that uses the local LLM preset and verifies the correct base URL is used.
- Manual Testing:
  - Install the modified plugin in Obsidian.
  - Configure a preset with a custom base URL pointing to a local LLM.
  - Perform completions and verify that requests are sent to the correct URL.
  - Switch between official OpenAI and local LLM presets to ensure smooth transitions.
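A unit test along these lines covers the base-URL wiring. The sketch assumes Jest and the v3 `openai` package used above; `makeTestPlugin` is a hypothetical fixture that constructs a plugin instance with the given preset:

```typescript
import { Configuration, OpenAIApi } from "openai";

jest.mock("openai"); // auto-mock the SDK so no network calls are made

test("completeOpenAI passes the preset's baseUrl to the OpenAI client", async () => {
  (OpenAIApi as unknown as jest.Mock).mockImplementation(() => ({
    createCompletion: jest.fn().mockResolvedValue({
      data: { choices: [{ text: "hello" }] },
    }),
  }));

  // Hypothetical fixture: builds a plugin whose active preset uses the local base URL.
  const plugin = makeTestPlugin({ baseUrl: "http://localhost:8000/v1" });
  const result = await plugin.completeOpenAI("prompt");

  expect(result.ok).toBe(true);
  expect(Configuration).toHaveBeenCalledWith(
    expect.objectContaining({ basePath: "http://localhost:8000/v1" })
  );
});
```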
- Update the plugin version in `manifest.json`.
- Create a new release on the GitHub repository.
- Update the plugin in the Obsidian Community Plugins directory.
Update the plugin's documentation to include:
- Instructions on how to set up and use local LLMs with the plugin.
- Explanation of the new "OpenAI Base URL" setting.
- Any known limitations or considerations when using local LLMs.
- Implement a way to easily share and import custom LLM configurations