@davidmezzetti · Created June 30, 2024 18:38
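# Assumes a local Ollama server is running and the referenced models have been
# pulled, e.g. `ollama pull all-minilm` and `ollama pull mistral-openorca`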
from txtai import Embeddings, LLM
# Data to index
data = [
    "US tops 5 million confirmed virus cases",
    "Canada's last fully intact ice shelf has suddenly collapsed, forming a Manhattan-sized iceberg",
    "Beijing mobilises invasion craft along coast as Taiwan tensions escalate",
    "The National Park Service warns against sacrificing slower friends in a bear attack",
    "Maine man wins $1M from $25 lottery ticket",
    "Make huge profits without work, earn up to $100,000 a day"
]
# Vector store with embeddings via local Ollama server
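# content=True stores the original text alongside the vectors, so search
# results include a "text" field (used to build the context below)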
embeddings = Embeddings(path="ollama/all-minilm", content=True)
embeddings.index(data)
# LLM via local Ollama server
llm = LLM(path="ollama/mistral-openorca")
# Question and context
question = "funny story"
context = "\n".join(x["text"] for x in embeddings.search(question))
# RAG
print(llm([
    {"role": "system",
     "content": "You are a friendly assistant. You answer questions from users."},
    {"role": "user",
     "content": f"""
Answer the following question using only the context below. Only include information
specifically discussed.
question: {question}
context: {context}
"""}
]))
# Output: A funny story from the given context is that the National Park Service
# warned people not to sacrifice their slower friends in a bear attack
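# A minimal sketch (not part of the original gist) wrapping the retrieve + generate
# steps above into a reusable helper; the name `answer` and the `limit` parameter
# are illustrative additions, reusing the `embeddings` and `llm` instances above
def answer(question, limit=3):
    # Retrieve the top matching rows and join their text into a context block
    context = "\n".join(x["text"] for x in embeddings.search(question, limit))
    # Prompt the LLM to answer strictly from the retrieved context
    return llm([
        {"role": "system",
         "content": "You are a friendly assistant. You answer questions from users."},
        {"role": "user",
         "content": f"question: {question}\ncontext: {context}"}
    ])

print(answer("positive story"))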