Using local LLMs anywhere (in a text editor) - example below with Obsidian
Inspired by and adapted from LLM-automator.
Code example with mixtral
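The pattern behind LLM-automator-style editor commands is: take the selected text, send it to a locally served model, and paste the reply back into the note. Below is a minimal sketch of that idea; it assumes the `ollama` CLI is installed and a `mixtral` model has already been pulled, and the helper name `ask_mixtral` and the prompt are placeholders rather than code taken from LLM-automator.

```python
# Sketch (assumption): call a local mixtral model through the ollama CLI,
# the way an editor automation (e.g. an Obsidian / LLM-automator command) might.
import subprocess

def ask_mixtral(prompt: str) -> str:
    """Send `prompt` to the local mixtral model via `ollama run` and return its reply."""
    result = subprocess.run(
        ["ollama", "run", "mixtral", prompt],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    print(ask_mixtral("Summarize this note in one sentence: ..."))
```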
```python
import re
import subprocess

def _parse_names(data: str) -> list[str]:
    """
    Parse names from a multi-line string where each line contains a name and other details.
    Parameters:
        data (str): A multi-line string containing names and other details.
    Returns:
        list[str]: The name found at the start of each non-empty line.
    """
    # Assumption: each line starts with the name, followed by the other details.
    matches = (re.match(r"\s*(\S+)", line) for line in data.splitlines())
    return [m.group(1) for m in matches if m]
```
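A quick usage check for `_parse_names` (the sample lines are made up):

```python
# Hypothetical input: one "<name> <details>" entry per line.
sample = """Alice 34 engineer
Bob 29 designer
Carol 41 manager"""

print(_parse_names(sample))  # -> ['Alice', 'Bob', 'Carol']
```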
```python
# based on: https://github.com/joaomdmoura/crewAI#getting-started
from crewai import Agent, Task, Crew
from langchain_community.llms import Ollama
from langchain_community.tools import DuckDuckGoSearchRun

# -- model
# ollama_llm = Ollama(model="arabic_deepseek-llm")
# ollama_llm = Ollama(model="arabic_notux")
ollama_llm = Ollama(model="arabic_mixtral")
```
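To show where `ollama_llm` and the imported search tool would plug in, here is a hedged sketch following the crewAI getting-started guide linked above; the role, goal, backstory, and task text are placeholders, and constructor fields may differ in newer crewAI versions.

```python
# Sketch: wire the local Ollama-served model into a crewAI agent (values are illustrative).
search_tool = DuckDuckGoSearchRun()

researcher = Agent(
    role="Researcher",
    goal="Find and summarize recent information on a topic",
    backstory="A diligent analyst who works with a locally hosted LLM.",
    tools=[search_tool],
    llm=ollama_llm,          # local model instead of the default OpenAI backend
    allow_delegation=False,
    verbose=True,
)

task = Task(
    description="Research the latest developments in local LLM tooling and summarize them.",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[task], verbose=True)
result = crew.kickoff()
print(result)
```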
```python
import torch

# Check if CUDA is available, else check for MPS, otherwise default to CPU
if torch.cuda.is_available():
    device = torch.device("cuda")  # NVIDIA GPU
elif torch.backends.mps.is_available():
    device = torch.device("mps")   # macOS M-series chip
else:
    device = torch.device("cpu")   # CPU fallback
```