
Using local LLMs anywhere (in text editor) - example below with Obsidian

Inspired by and adapted from LLM-automator.

Code example with Mixtral

(Demo video: automator-llm-code.mp4)
iamaziz / list_local_ollama_models.py
Created January 14, 2024 07:03
List all Ollama model names installed on the local machine.
import re
import subprocess  # used elsewhere in the gist to invoke `ollama list`


def _parse_names(data: str) -> list[str]:
    """Parse model names from a multi-line string where each line
    contains a name followed by other details.

    Parameters:
        data (str): A multi-line string containing names and other details.
    """
    # Skip the header row; the name is the first whitespace-delimited field
    return [re.split(r"\s+", line)[0] for line in data.splitlines()[1:] if line.strip()]
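The parsing step can be exercised without Ollama installed. A minimal sketch, assuming `ollama list` prints a header row followed by one model per line (the exact columns below are illustrative, not taken from the gist):

```python
import re

# Sample text in the shape `ollama list` prints: a header row, then one model per line.
sample = """NAME                ID              SIZE    MODIFIED
mixtral:latest      abc123def456    26 GB   2 days ago
llama2:7b           789ghi012jkl    3.8 GB  5 weeks ago
"""

def parse_names(data: str) -> list[str]:
    # The model name is the first whitespace-delimited field on each non-header line
    return [re.split(r"\s+", line)[0] for line in data.splitlines()[1:] if line.strip()]

print(parse_names(sample))  # ['mixtral:latest', 'llama2:7b']
```

In the full script, `data` would come from `subprocess.run(["ollama", "list"], capture_output=True, text=True).stdout` instead of the hard-coded sample.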
iamaziz / trying_crewai-arabic-llm.py
Created January 13, 2024 02:01
Trying out Agents/Tools with Arabic LLMs
# based on: https://github.com/joaomdmoura/crewAI#getting-started
from crewai import Agent, Task, Crew
from langchain_community.llms import Ollama
from langchain_community.tools import DuckDuckGoSearchRun

# -- model (other Arabic fine-tunes tried below)
# ollama_llm = Ollama(model="arabic_deepseek-llm")
# ollama_llm = Ollama(model="arabic_notux")
ollama_llm = Ollama(model="arabic_mixtral")
iamaziz / Jais-LLM-GettingStarted-v2.ipynb
Last active April 16, 2024 16:59
Running Jais LLM on an M3 Max chip with 64GB, using `torch_dtype=torch.float16`. Now much faster, but the output is way off.
iamaziz / Jais-LLM-GettingStarted.ipynb
Created December 30, 2023 17:16
Running Jais LLM on an M3 Max chip with 64GB; for some reason it's very slow and too big for a 13B model.
iamaziz / pytorch_device.py
Created December 30, 2023 06:57
Enable the macOS M-series chip (MPS backend) for PyTorch
import torch

# Check if CUDA is available, else check for MPS, otherwise default to CPU
if torch.cuda.is_available():
    device = torch.device("cuda")  # NVIDIA GPU
elif torch.backends.mps.is_available():
    device = torch.device("mps")  # macOS M-series chip
else:
    device = torch.device("cpu")  # CPU fallback
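Once selected, the same `device` object is passed to tensors and models so computation runs on that backend. A minimal usage sketch (the layer and tensor shapes are arbitrary, chosen only for illustration):

```python
import torch

# Pick the best available backend, mirroring the snippet above
if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

# Move a tensor and a model to the chosen device; the forward pass runs there
x = torch.randn(4, 8).to(device)
model = torch.nn.Linear(8, 2).to(device)
y = model(x)
print(y.shape)  # torch.Size([4, 2])
```

The same code runs unchanged on a CUDA machine, an Apple-silicon Mac, or a CPU-only box, which is the point of selecting the device once up front.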
iamaziz / LLM-from-scratch.ipynb
Created July 23, 2023 17:02
Building a large language model (LLM) from scratch (for learning and fun - inspired by Llama2).
iamaziz / tf_practice_mnist_nn_normalized_vs-_non-normalized-data.ipynb
Created November 14, 2020 02:37
tf_practice_MNIST_nn_normalized_vs._non-normalized-data
iamaziz / tf_practice_1_mini_linear_nn.ipynb
Created November 14, 2020 01:09
tf_practice_1_mini_linear_nn.ipynb