mkdir docs/
cd docs/
pip install sphinx sphinx-rtd-theme myst-parser recommonmark
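The packages installed above can be wired together in docs/conf.py. A minimal sketch (the project name and source_suffix mapping are placeholder assumptions; note that recommonmark has been deprecated in favor of myst-parser, so listing only myst_parser is usually enough):

```python
# docs/conf.py -- minimal Sphinx configuration (sketch; values are placeholders)
project = "My Project"

# myst_parser enables Markdown sources; recommonmark is its deprecated predecessor.
extensions = [
    "myst_parser",
]

# Use the Read the Docs theme installed via sphinx-rtd-theme.
html_theme = "sphinx_rtd_theme"

# Accept both reStructuredText and Markdown source files.
source_suffix = {
    ".rst": "restructuredtext",
    ".md": "markdown",
}
```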
#!/bin/bash
# Backup existing repo files
echo "Creating backup of existing repo files..."
cd /etc/yum.repos.d/
mkdir -p backup
mv CentOS-* backup/

# Create new CentOS-Base.repo file
echo "Creating new CentOS-Base.repo file..."
import warnings
from fastapi import FastAPI
from langchain_experimental.llms.ollama_functions import OllamaFunctions
from pydantic import BaseModel
from typing import List
warnings.filterwarnings('ignore')
app = FastAPI()

Note that %21RotqTsIrBvXaPAbhiC%3Atwkl.chat is the room_id (!RotqTsIrBvXaPAbhiC:twkl.chat) after URL-encoding it using this tool:
curl --location --request PUT 'http://localhost:8008/_matrix/client/r0/pushrules/global/room/%21RotqTsIrBvXaPAbhiC%3Atwkl.chat' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer <user_token>' \
--data '{
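The URL-encoded room_id in the request path above can also be produced programmatically; a small sketch using Python's standard library:

```python
from urllib.parse import quote

room_id = "!RotqTsIrBvXaPAbhiC:twkl.chat"

# safe="" forces every reserved character to be percent-encoded
# ('!' -> %21, ':' -> %3A), as required for the URL path segment.
encoded = quote(room_id, safe="")
print(encoded)  # %21RotqTsIrBvXaPAbhiC%3Atwkl.chat
```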
Steps for building a RAG system with the LangChain framework:
1. Prepare the data (knowledge base).
2. Use a Loader object to load the data in the proper format.
3. Chunk the data into appropriately sized pieces.
4. Create the Embedding and Retriever:
   4.1. Use an embedding model such as BAAI/bge-base-en-v1.5 from HuggingFace.
   4.2. Create a vector database such as FAISS (Facebook AI Similarity Search). The Retriever is an object obtained from the DB, e.g. `retriever = db.as_retriever(search_type="similarity", search_kwargs={"k": 4})`.
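The steps above can be sketched without the LangChain dependency. A minimal, dependency-free illustration of chunking and top-k retrieval (the chunk size, word-overlap scoring, and k=2 are assumptions for the demo; a real system would use an embedding model and FAISS as described):

```python
def chunk(text, size=40):
    """Split the text into fixed-size character chunks (step 3)."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def retrieve(query, chunks, k=2):
    """Rank chunks by word overlap with the query and return the top k
    (a toy stand-in for embedding similarity search, step 4)."""
    q = set(query.lower().split())
    scored = sorted(chunks, key=lambda c: len(q & set(c.lower().split())), reverse=True)
    return scored[:k]

# Steps 1-2: a toy knowledge base loaded as a plain string.
kb = ("FAISS is a library for similarity search. LangChain wraps vector stores. "
     "Retrievers return the k most similar chunks.")
docs = chunk(kb)
print(retrieve("similarity search library", docs))
```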
apt install python3 python3-pip wget curl unzip portaudio19-dev virtualenv libespeak1 libespeak-dev -y
cd /opt
wget https://github.com/lamoboos223/demo-faster-whisper/archive/refs/heads/master.zip
unzip master.zip
rm master.zip
cd demo-faster-whisper-master
virtualenv myenv
source myenv/bin/activate
python3 -m pip install -r requirements.txt
python3 __init__.py
Install Ollama on Linux:
curl -fsSL https://ollama.com/install.sh | sh
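Once installed, the Ollama server listens on http://localhost:11434 by default. A sketch of calling its /api/generate endpoint from Python (the model name llama3 is an assumption; pull it first with `ollama pull llama3`):

```python
import json
from urllib import request

# Request payload for Ollama's generate endpoint; stream=False asks for a
# single JSON object instead of a stream of chunks.
payload = {"model": "llama3", "prompt": "Why is the sky blue?", "stream": False}

def ask_ollama(payload, url="http://localhost:11434/api/generate"):
    """POST the payload to the local Ollama server and return its text reply."""
    req = request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# ask_ollama(payload) returns the model's answer once the server is running.
```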