
chuanqichen / main.py
Created July 1, 2024 21:35 — forked from gswithjeff/main.py
Example showing how to use Chroma DB and LangChain to store and retrieve your vector embeddings
# https://www.gettingstarted.ai
# email: jeff@gettingstarted.ai
# written by jeff
# Don't forget to set your OPENAI_API_KEY
# In your terminal, run: export OPENAI_API_KEY="YOUR_KEY_HERE"
# Import required modules from the LangChain package
# (these import paths are for older LangChain releases; from v0.1 onward,
# Chroma lives in langchain_community.vectorstores)
from langchain.chains import RetrievalQA
from langchain.vectorstores import Chroma
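The snippet above is truncated before the store is built. As a self-contained illustration of the store-then-retrieve flow that Chroma handles for LangChain, here is a minimal in-memory sketch using cosine similarity; `TinyVectorStore` and its methods are hypothetical stand-ins, not the Chroma API:

```python
from math import sqrt

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

class TinyVectorStore:
    """Toy in-memory vector store (illustrative only, not the Chroma API)."""

    def __init__(self):
        self.records = []  # list of (embedding, document) pairs

    def add(self, embedding, document):
        self.records.append((embedding, document))

    def query(self, embedding, k=1):
        # Rank stored documents by similarity to the query embedding.
        ranked = sorted(
            self.records,
            key=lambda rec: cosine_similarity(embedding, rec[0]),
            reverse=True,
        )
        return [doc for _, doc in ranked[:k]]

store = TinyVectorStore()
store.add([1.0, 0.0], "doc about cats")
store.add([0.0, 1.0], "doc about dogs")
print(store.query([0.9, 0.1], k=1))  # nearest document to the query vector
```

A real embedding model maps text to high-dimensional vectors, but the retrieval step is the same nearest-neighbor lookup shown here.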
chuanqichen / main.py
Created July 1, 2024 21:33 — forked from gswithjeff/main.py
Updated for LlamaIndex v0.10+: Building a RAG pipeline using LlamaIndex, Google Gemini Pro, and Pinecone
import os
from pinecone import Pinecone
from llama_index.llms.gemini import Gemini
from llama_index.vector_stores.pinecone import PineconeVectorStore
from llama_index.embeddings.gemini import GeminiEmbedding
from llama_index.core import StorageContext, VectorStoreIndex, download_loader
from llama_index.core import Settings
# Set your Gemini credentials here (never commit a real key)
GOOGLE_API_KEY = "YOUR_API_KEY"
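The imports above wire LlamaIndex to Gemini and Pinecone, and the snippet is cut off before the query step. As a self-contained sketch of what a RAG pipeline does after retrieval, the following stitches retrieved chunks into the prompt sent to the LLM; `build_prompt` is a hypothetical helper, not part of the LlamaIndex API, which performs an equivalent assembly internally:

```python
def build_prompt(question, retrieved_chunks):
    # Number each retrieved chunk and place it in a context section,
    # then append the user's question below it.
    context = "\n\n".join(
        f"[{i + 1}] {chunk}" for i, chunk in enumerate(retrieved_chunks)
    )
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

# Illustrative chunks standing in for Pinecone query results.
chunks = [
    "Pinecone stores vector embeddings.",
    "Gemini generates the final answer.",
]
prompt = build_prompt("What does Pinecone store?", chunks)
print(prompt)
```

In the full pipeline, `VectorStoreIndex.as_query_engine()` handles this retrieve-then-prompt step for you; the sketch only shows the shape of the prompt the LLM ultimately receives.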