download and install Ollama. Running a particular LLM is as simple as `ollama run llama3`, which pulls the corresponding model to your local machine on first use.

Using Ollama with Obsidian

Obsidian Copilot

Vault QA

Using