Python-related tutorials, organized into separate files by category

Python Tutorials

Python is a beginner-friendly language, and the machine learning community now depends heavily on Python alongside C++, CUDA C, R, and the like, which keeps Python firmly at the top of the popularity rankings. This Gist provides Python tutorials that can be run directly in Jupyter Notebook.

  1. Language-level tutorials, generally not covering beginner topics;
  2. Standard library tutorials, covering basic usage of the most common standard libraries;
  3. Third-party library tutorials, mainly common libraries such as numpy, pytorch, and so on; only basic usage is covered, without newer features.

Other content will not go into this Gist. Note that Gists are still version-controlled by git, so you can git clone it locally, or open the corresponding ipynb files directly in Google Colab or Kaggle.

When browsing directly on the web there is no file list, so press Ctrl + F to search for the section you want, or click the hyperlinks below.

If you would like to contribute, just leave a comment; questions also go in the comments ^.^

Table of Contents: Language

Table of Contents: Libraries

Table of Contents: Domain-Specific Libraries (this tutorial focuses on machine learning and deep learning)

{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# LlamaIndex - 面向 LLM 的数据框架教程 (RAG 重点)\n",
"\n",
"欢迎来到 LlamaIndex 教程!LlamaIndex (曾用名 GPT Index) 是一个开源的数据框架,专门用于将自定义数据源连接到大型语言模型 (LLM),从而构建强大的**检索增强生成 (Retrieval-Augmented Generation, RAG)** 应用程序。\n",
"\n",
"与 LangChain 类似,LlamaIndex 也旨在简化 LLM 应用开发,但它**特别专注于数据的摄入 (Ingestion)、索引 (Indexing) 和查询 (Querying)** 过程,为构建高效、准确的 RAG 系统提供了丰富的高级工具和策略。\n",
"\n",
"**为什么使用 LlamaIndex?**\n",
"\n",
"1. **强大的 RAG 功能**: 提供了从数据加载、解析、索引到复杂查询策略的全套 RAG 工具链。\n",
"2. **多样化的数据连接器 (Readers/Loaders)**: 支持从各种来源(PDF, API, 数据库, Notion, Slack 等)加载数据。\n",
"3. **灵活的索引结构**: 支持多种索引类型(向量索引、列表索引、关键词表索引、树索引等),适用于不同场景。\n",
"4. **高级查询引擎**: 提供了超越简单相似性搜索的查询策略,如多步查询、联合查询、带综合的查询等。\n",
"5. **与 LLM 和 Embedding 模型解耦**: 可以灵活搭配不同的 LLM 和 Embedding 模型。\n",
"6. **可观测性与评估**: 集成了调试和评估 RAG 应用的工具。\n",
"7. **与 LangChain 可集成**: 两者可以相互配合使用。\n",
"\n",
"**核心概念概览:**\n",
"* **Documents**: 数据的容器,可以是文本文件、PDF 或其他来源的数据。\n",
"* **Nodes**: 文档被解析成的原子数据单元(通常是文本块),包含文本和元数据。\n",
"* **Readers/Loaders**: 用于从数据源加载数据并创建 `Document` 对象。\n",
"* **Indexes**: 根据 Nodes 构建的数据结构,用于高效检索。\n",
"* **Embeddings**: 将文本转换为向量表示的模型。\n",
"* **Retrievers**: 从索引中根据查询检索相关 Nodes 的组件。\n",
"* **Node Postprocessors**: 对检索到的 Nodes 进行重新排序或过滤。\n",
"* **Response Synthesizers**: 根据检索到的上下文和原始查询生成最终答案。\n",
"* **Query Engines**: 封装了从查询到生成响应的端到端逻辑 (Retrieval -> Postprocessing -> Synthesis)。\n",
"* **Chat Engines**: 用于构建基于索引数据的对话式应用。\n",
"\n",
"**本教程将涵盖 LlamaIndex 的核心 RAG 工作流程:**\n",
"\n",
"1. 安装与设置 (API Keys)\n",
"2. 数据加载 (SimpleDirectoryReader)\n",
"3. 文档解析与节点创建 (NodeParser)\n",
"4. 服务上下文 (ServiceContext) 与模型配置 (LLM, Embeddings)\n",
"5. 构建索引 (VectorStoreIndex)\n",
"6. 创建查询引擎 (Query Engine)\n",
"7. 进行查询与获取响应\n",
"8. (简介) 创建聊天引擎 (Chat Engine)\n",
"9. (简介) 保存与加载索引"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 1. 安装与设置\n",
"\n",
"安装 LlamaIndex 核心库。根据你使用的 LLM 和 Embedding 模型,可能需要安装额外的库 (如 `openai`, `transformers`, `sentence-transformers`)。\n",
"\n",
"```bash\n",
"pip install llama-index\n",
"\n",
"# 如果使用 OpenAI 模型 (推荐开始)\n",
"pip install openai python-dotenv\n",
"\n",
"# 如果使用 HuggingFace 模型 (可选)\n",
"# pip install transformers torch sentence-transformers accelerate # accelerate for faster loading\n",
"```\n",
"\n",
"**设置 API Keys**: 与 LangChain 类似,将你的 OpenAI API Key (或其他 LLM Key) 存储在环境变量或 `.env` 文件中。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import llama_index\n",
"import os\n",
"from dotenv import load_dotenv\n",
"import logging\n",
"import sys\n",
"\n",
"# 配置日志记录,方便观察 LlamaIndex 内部流程 (可选)\n",
"# logging.basicConfig(stream=sys.stdout, level=logging.INFO) # INFO level\n",
"# logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))\n",
"\n",
"# 尝试加载 .env 文件\n",
"load_success = load_dotenv() \n",
"print(f\"LlamaIndex version: {llama_index.__version__}\")\n",
"print(f\".env file loaded: {load_success}\")\n",
"\n",
"# 检查 OpenAI API Key\n",
"openai_api_key = os.getenv(\"OPENAI_API_KEY\")\n",
"if openai_api_key:\n",
" print(\"OpenAI API Key found.\")\n",
" openai_available_llama = True\n",
"else:\n",
" print(\"OpenAI API Key not found. OpenAI related examples will be skipped.\")\n",
" openai_available_llama = False\n",
"\n",
"# 可以手动设置 Key,但不推荐\n",
"# import openai\n",
"# openai.api_key = \"sk-...\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2. 数据加载 (SimpleDirectoryReader)\n",
"\n",
"LlamaIndex 提供了多种 `Reader` (或称 `Loader`) 来加载不同来源的数据。`SimpleDirectoryReader` 是最常用的之一,它可以加载一个目录下所有支持的文件 (如 .txt, .pdf, .docx)。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from llama_index import SimpleDirectoryReader\n",
"\n",
"print(\"--- Data Loading --- \")\n",
"\n",
"# 1. 创建一个包含示例文件的目录\n",
"data_dir = \"llamaindex_data\"\n",
"os.makedirs(data_dir, exist_ok=True)\n",
"\n",
"file1_path = os.path.join(data_dir, \"file1.txt\")\n",
"file2_path = os.path.join(data_dir, \"file2.md\")\n",
"\n",
"with open(file1_path, \"w\") as f:\n",
" f.write(\"The first document discusses apples and oranges. Apples are red or green, oranges are orange.\")\n",
"with open(file2_path, \"w\") as f:\n",
" f.write(\"# Fruits Summary\\n\\nBananas are yellow and curved. Grapes grow in bunches.\")\n",
"print(f\"Created sample files in '{data_dir}' directory.\")\n",
"\n",
"# 2. 使用 SimpleDirectoryReader 加载数据\n",
"try:\n",
" reader = SimpleDirectoryReader(data_dir)\n",
" documents = reader.load_data()\n",
" print(f\"\\nLoaded {len(documents)} documents.\")\n",
" \n",
" # Document 对象包含 text 和 metadata\n",
" print(\"\\nContent of the first document:\")\n",
" print(f\" Text: {documents[0].text}\")\n",
" print(f\" Metadata: {documents[0].metadata}\")\n",
" \n",
" print(\"\\nContent of the second document:\")\n",
" print(f\" Text: {documents[1].text}\")\n",
" print(f\" Metadata: {documents[1].metadata}\")\n",
" \n",
" documents_available = True\n",
"\n",
"except Exception as e:\n",
" print(f\"Error loading documents: {e}\")\n",
" documents_available = False"
]
},
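{
"cell_type": "markdown",
"metadata": {},
"source": [
"`SimpleDirectoryReader` also accepts filtering options. The minimal sketch below assumes the `input_files`, `required_exts`, and `recursive` keyword arguments available in current releases: load only explicitly listed files, or only files with certain extensions."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"if documents_available:\n",
"    # Load only explicitly listed files\n",
"    listed_docs = SimpleDirectoryReader(input_files=[file2_path]).load_data()\n",
"    print(f\"Loaded {len(listed_docs)} document(s) from an explicit file list.\")\n",
"\n",
"    # Load only .md files, searching subdirectories recursively\n",
"    md_docs = SimpleDirectoryReader(data_dir, required_exts=[\".md\"], recursive=True).load_data()\n",
"    print(f\"Loaded {len(md_docs)} Markdown document(s) from '{data_dir}'.\")"
]
},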
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 3. 文档解析与节点创建 (NodeParser)\n",
"\n",
"加载的 `Document` 对象通常需要被分割成更小的 `Node` 对象(文本块),以便于 Embedding 和检索。LlamaIndex 提供了 `NodeParser` (例如 `SentenceSplitter`) 来完成这个任务。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from llama_index.node_parser import SimpleNodeParser, SentenceSplitter\n",
"from llama_index.schema import Document # Import Document schema if needed for type hints\n",
"\n",
"print(\"\\n--- Node Parsing --- \")\n",
"nodes = []\n",
"if documents_available:\n",
" # 使用 SentenceSplitter (更推荐,考虑句子边界)\n",
" # chunk_size: 每个块的目标大小\n",
" # chunk_overlap: 块之间的重叠\n",
" node_parser = SentenceSplitter(chunk_size=64, chunk_overlap=10)\n",
" \n",
" # 从 Documents 生成 Nodes\n",
" nodes = node_parser.get_nodes_from_documents(documents)\n",
" \n",
" print(f\"Split documents into {len(nodes)} Nodes.\")\n",
" print(\"\\nExample Nodes:\")\n",
" for i, node in enumerate(nodes[:3]): # Display first few nodes\n",
" print(f\"--- Node {i+1} ---\")\n",
" print(f\" Text: {node.get_content().replace('\\n', ' ')}\") # node.text is deprecated, use get_content()\n",
" print(f\" Node ID: {node.node_id}\")\n",
" print(f\" Metadata: {node.metadata}\") # Metadata includes original filename\n",
" # print(f\" Relationships: {node.relationships}\") # Shows links to other nodes/doc\n",
"else:\n",
" print(\"Documents not loaded, skipping node parsing.\")"
]
},
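{
"cell_type": "markdown",
"metadata": {},
"source": [
"A Reader is not required: `Document` objects can also be constructed directly from strings, which is convenient for data already in memory. A minimal sketch using the `Document` class imported above:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Build a Document by hand and split it with the same kind of parser\n",
"manual_doc = Document(\n",
"    text=\"Cherries are small and red. Watermelons are large and mostly water.\",\n",
"    metadata={\"source\": \"inline-string\"},\n",
")\n",
"manual_nodes = SentenceSplitter(chunk_size=64, chunk_overlap=10).get_nodes_from_documents([manual_doc])\n",
"print(f\"Manual document produced {len(manual_nodes)} Node(s).\")\n",
"print(manual_nodes[0].metadata)  # metadata propagates from the Document to its Nodes"
]
},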
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 4. 服务上下文 (ServiceContext) 与模型配置\n",
"\n",
"**(注意:在 LlamaIndex 的较新版本 (>=0.10) 中,`ServiceContext` 已被弃用,推荐直接将 LLM 和 Embedding 模型传递给索引或查询引擎,或者使用全局设置 `Settings`。)**\n",
"\n",
"为了演示配置,我们将使用新的 `Settings` 方法。\n",
"\n",
"`Settings` 用于全局配置 LlamaIndex 使用的 LLM、Embedding 模型、NodeParser、CallbackManager 等。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from llama_index.llms import OpenAI\n",
"from llama_index.embeddings import OpenAIEmbedding\n",
"# from llama_index.llms import HuggingFaceLLM\n",
"# from llama_index.embeddings import HuggingFaceEmbedding\n",
"from llama_index import ServiceContext, Settings # Import Settings\n",
"import torch # Check torch availability for potential HuggingFace models\n",
"\n",
"print(\"\\n--- Configuring Models (using Settings) ---\")\n",
"\n",
"# --- 配置 LLM --- \n",
"llm = None\n",
"if openai_available_llama:\n",
" try:\n",
" # 使用 OpenAI GPT-3.5 Turbo (默认)\n",
" llm = OpenAI(model=\"gpt-3.5-turbo\", temperature=0.1)\n",
" print(\"OpenAI LLM (gpt-3.5-turbo) configured.\")\n",
" except Exception as e:\n",
" print(f\"Error configuring OpenAI LLM: {e}\")\n",
"# else:\n",
"# # 尝试配置 Hugging Face LLM (如果需要且已安装)\n",
"# try:\n",
"# # 注意: 可能需要登录 Hugging Face Hub (huggingface-cli login)\n",
"# # 并且需要安装 accelerate: pip install accelerate\n",
"# # 选择一个合适的模型, e.g., \"google/flan-t5-small\"\n",
"# # llm = HuggingFaceLLM(model_name=\"google/flan-t5-small\", device_map=\"auto\")\n",
"# # print(\"HuggingFace LLM configured.\")\n",
"# pass # Keep commented out for simplicity\n",
"# except Exception as e:\n",
"# print(f\"Could not configure HuggingFace LLM: {e}\")\n",
"\n",
"# --- 配置 Embedding 模型 --- \n",
"embed_model = None\n",
"if openai_available_llama:\n",
" try:\n",
" embed_model = OpenAIEmbedding()\n",
" print(\"OpenAI Embedding model configured.\")\n",
" except Exception as e:\n",
" print(f\"Error configuring OpenAI Embedding: {e}\")\n",
"# else:\n",
"# # 尝试配置 Hugging Face Embedding 模型\n",
"# try:\n",
"# # model_name = \"sentence-transformers/all-MiniLM-L6-v2\"\n",
"# # embed_model = HuggingFaceEmbedding(model_name=model_name)\n",
"# # print(f\"HuggingFace Embedding model '{model_name}' configured.\")\n",
" pass # Keep commented out\n",
"# except Exception as e:\n",
"# print(f\"Could not configure HuggingFace Embedding: {e}\")\n",
"\n",
"# --- 全局设置 (新方法 >= 0.10) ---\n",
"if llm: Settings.llm = llm\n",
"if embed_model: Settings.embed_model = embed_model\n",
"if 'node_parser' in locals() and node_parser: Settings.node_parser = node_parser\n",
"# Settings.chunk_size = 512 # Can also set chunk size globally\n",
"\n",
"print(\"\\nGlobal Settings configured (if models were available):\")\n",
"print(f\" LLM: {Settings.llm}\")\n",
"print(f\" Embed Model: {Settings.embed_model}\")\n",
"\n",
"# --- 旧方法: ServiceContext (了解即可) ---\n",
"# service_context = ServiceContext.from_defaults(\n",
"# llm=llm, \n",
"# embed_model=embed_model, \n",
"# node_parser=node_parser # Can pass parser here too\n",
"# )"
]
},
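{
"cell_type": "markdown",
"metadata": {},
"source": [
"Besides the global `Settings`, models can also be passed per component. A minimal sketch, assuming the >= 0.10 API that accepts `llm` / `embed_model` keyword arguments (kept commented out since it duplicates the global configuration above):\n",
"\n",
"```python\n",
"# # Per-component override instead of global Settings\n",
"# index = VectorStoreIndex(nodes, embed_model=embed_model)\n",
"# query_engine = index.as_query_engine(llm=llm)\n",
"```"
]
},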
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 5. 构建索引 (VectorStoreIndex)\n",
"\n",
"将处理好的 `Nodes` 转换为索引,以便快速检索。\n",
"`VectorStoreIndex` 是最常用的索引类型,它会为每个 Node 生成 embedding 向量,并存储在一个向量数据库中 (默认是内存中的简单实现,也可以配置为 FAISS, Chroma, Pinecone 等)。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from llama_index import VectorStoreIndex\n",
"\n",
"print(\"\\n--- Building Index --- \")\n",
"index = None\n",
"# 确保有 nodes 并且 embedding 模型已配置\n",
"if nodes and Settings.embed_model:\n",
" try:\n",
" # 从 Nodes 构建索引,LlamaIndex 会使用 Settings 中配置的模型\n",
" index = VectorStoreIndex(nodes)\n",
" # 或者使用旧方法: index = VectorStoreIndex.from_documents(documents, service_context=service_context)\n",
" print(\"VectorStoreIndex built successfully.\")\n",
" \n",
" except Exception as e:\n",
" print(f\"Error building index: {e}\")\n",
" print(\"This might be due to issues with the embedding model or data.\")\n",
"else:\n",
" print(\"Skipping index building (Nodes or Embed model unavailable).\")"
]
},
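{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Using an external vector store (sketch)**: the minimal sketch below, assuming the optional `llama-index-vector-stores-chroma` integration and the `chromadb` package are installed, shows the general pattern of plugging an external store in through a `StorageContext`. It is illustrative rather than part of the main flow, so it is left commented out.\n",
"\n",
"```python\n",
"# pip install llama-index-vector-stores-chroma chromadb\n",
"# import chromadb\n",
"# from llama_index.core import StorageContext\n",
"# from llama_index.vector_stores.chroma import ChromaVectorStore\n",
"\n",
"# chroma_client = chromadb.EphemeralClient()  # in-memory Chroma instance\n",
"# chroma_collection = chroma_client.get_or_create_collection(\"llamaindex_demo\")\n",
"# vector_store = ChromaVectorStore(chroma_collection=chroma_collection)\n",
"# storage_context = StorageContext.from_defaults(vector_store=vector_store)\n",
"# chroma_index = VectorStoreIndex(nodes, storage_context=storage_context)\n",
"```"
]
},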
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 6. 创建查询引擎 (Query Engine)\n",
"\n",
"查询引擎封装了从索引进行查询并生成响应的逻辑。\n",
"`index.as_query_engine()` 是创建查询引擎的最简单方式。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(\"\\n--- Creating Query Engine --- \")\n",
"query_engine = None\n",
"if index and Settings.llm: # Need index and LLM\n",
" try:\n",
" # similarity_top_k 控制检索多少个最相关的节点\n",
" query_engine = index.as_query_engine(similarity_top_k=2)\n",
" print(\"Query engine created.\")\n",
" except Exception as e:\n",
" print(f\"Error creating query engine: {e}\")\n",
"else:\n",
" print(\"Skipping query engine creation (Index or LLM unavailable).\")"
]
},
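{
"cell_type": "markdown",
"metadata": {},
"source": [
"For finer control, a query engine can also be assembled from its parts. A minimal sketch, assuming the `RetrieverQueryEngine.from_args` constructor in `llama_index.core.query_engine`: build a retriever from the index, then wrap it. This is equivalent in spirit to `as_query_engine()`, just more explicit."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from llama_index.core.query_engine import RetrieverQueryEngine\n",
"\n",
"if index and Settings.llm:\n",
"    # Explicit pipeline: index -> retriever -> query engine\n",
"    retriever = index.as_retriever(similarity_top_k=2)\n",
"    explicit_engine = RetrieverQueryEngine.from_args(retriever)\n",
"    print(\"Explicit RetrieverQueryEngine created:\", type(explicit_engine).__name__)\n",
"else:\n",
"    print(\"Skipping explicit query engine example (Index or LLM unavailable).\")"
]
},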
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 7. 进行查询与获取响应\n",
"\n",
"使用查询引擎的 `.query()` 方法。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(\"\\n--- Querying --- \")\n",
"\n",
"if query_engine:\n",
" query1 = \"What colors can apples be?\"\n",
" print(f\"Query 1: {query1}\")\n",
" try:\n",
" response1 = query_engine.query(query1)\n",
" print(f\"Response 1: {response1}\")\n",
" # 查看检索到的源节点 (用于调试或引用)\n",
" print(\"\\nSource Nodes for Response 1:\")\n",
" for node in response1.source_nodes:\n",
" print(f\" - Score: {node.score:.4f}, Text: {node.node.get_content().replace('\\n',' ')}\")\n",
"\n",
" except Exception as e:\n",
" print(f\"Error during query 1: {e}\")\n",
" \n",
" query2 = \"Tell me about bananas.\"\n",
" print(f\"\\nQuery 2: {query2}\")\n",
" try:\n",
" response2 = query_engine.query(query2)\n",
" print(f\"Response 2: {response2}\")\n",
" print(\"\\nSource Nodes for Response 2:\")\n",
" for node in response2.source_nodes:\n",
" print(f\" - Score: {node.score:.4f}, Text: {node.node.get_content().replace('\\n',' ')}\")\n",
"\n",
" except Exception as e:\n",
" print(f\"Error during query 2: {e}\")\n",
"else:\n",
" print(\"Query engine not available, skipping querying example.\")"
]
},
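{
"cell_type": "markdown",
"metadata": {},
"source": [
"To see the retrieval step in isolation (no LLM synthesis), the index can be used as a plain retriever. A minimal sketch using `index.as_retriever()`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"if index:\n",
"    # Retrieve the raw nodes for a query without generating an answer\n",
"    retriever = index.as_retriever(similarity_top_k=2)\n",
"    retrieved = retriever.retrieve(\"What colors can apples be?\")\n",
"    for n in retrieved:\n",
"        flat_text = n.node.get_content().replace('\\n', ' ')\n",
"        print(f\"Score: {n.score:.4f} | {flat_text}\")\n",
"else:\n",
"    print(\"Index not available, skipping retriever example.\")"
]
},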
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 8. (简介) 创建聊天引擎 (Chat Engine)\n",
"\n",
"如果你想构建一个基于索引数据的对话机器人(可以记住之前的对话内容),可以使用 `index.as_chat_engine()`。\n",
"\n",
"**模式**: \n",
"* `condense_question`: 先将对话历史和新问题压缩成一个独立的问题,然后查询索引。\n",
"* `context`: 在每轮对话中都检索上下文,并将历史记录和新检索到的上下文一起提供给 LLM。\n",
"* `react`: 使用 ReAct 框架让 LLM 决定是查询索引还是直接回答。\n",
"\n",
"```python\n",
"# if index and Settings.llm:\n",
"# chat_engine = index.as_chat_engine(\n",
"# chat_mode=\"condense_question\", \n",
"# verbose=True\n",
"# )\n",
" \n",
"# response_chat1 = chat_engine.chat(\"What fruits were discussed?\")\n",
"# print(response_chat1)\n",
" \n",
"# response_chat2 = chat_engine.chat(\"Tell me more about the red ones.\") # Should use context\n",
"# print(response_chat2)\n",
" \n",
"# # chat_engine.reset() # Reset chat history\n",
"```"
]
},
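{
"cell_type": "markdown",
"metadata": {},
"source": [
"For comparison, a minimal sketch of `context` mode (assuming the `chat_mode` and `system_prompt` keyword arguments; kept commented out like the example above):\n",
"\n",
"```python\n",
"# if index and Settings.llm:\n",
"#     ctx_chat_engine = index.as_chat_engine(\n",
"#         chat_mode=\"context\",\n",
"#         system_prompt=\"You answer questions about the fruit notes only.\",\n",
"#     )\n",
"#     print(ctx_chat_engine.chat(\"Which fruits grow in bunches?\"))\n",
"```"
]
},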
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 9. (简介) 保存与加载索引\n",
"\n",
"构建索引(特别是包含 embedding)可能比较耗时。LlamaIndex 允许你将索引的存储(包括向量存储)持久化到磁盘,以便后续快速加载。\n",
"\n",
"```python\n",
"# from llama_index import StorageContext, load_index_from_storage\n",
"\n",
"# # --- 保存 --- \n",
"# index_persist_dir = \"./saved_llama_index\"\n",
"# if index:\n",
"# index.storage_context.persist(persist_dir=index_persist_dir)\n",
"# print(f\"Index saved to {index_persist_dir}\")\n",
"\n",
"# # --- 加载 --- \n",
"# # 需要确保 Settings (LLM, Embed model) 仍然配置正确\n",
"# try:\n",
"# storage_context = StorageContext.from_defaults(persist_dir=index_persist_dir)\n",
"# loaded_index = load_index_from_storage(storage_context)\n",
"# print(f\"Index loaded successfully from {index_persist_dir}\")\n",
"# # loaded_query_engine = loaded_index.as_query_engine()\n",
"# # response = loaded_query_engine.query(\"What are apples like?\")\n",
"# # print(response)\n",
"# except FileNotFoundError:\n",
"# print(f\"Index directory {index_persist_dir} not found.\")\n",
"# except Exception as e:\n",
"# print(f\"Error loading index: {e}\")\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 总结\n",
"\n",
"LlamaIndex 是一个强大的数据框架,专门用于构建基于 LLM 的 RAG 应用程序。它提供了灵活的数据加载、解析、索引和查询机制。\n",
"\n",
"**关键要点:**\n",
"* 核心流程: **Load -> Parse (Nodes) -> Embed -> Index -> Query**。\n",
"* `SimpleDirectoryReader` 用于方便地加载本地文件。\n",
"* `SentenceSplitter` (或类似 NodeParser) 用于将文档分割成 Nodes。\n",
"* 通过 `Settings` (或旧的 `ServiceContext`) 配置 LLM 和 Embedding 模型。\n",
"* `VectorStoreIndex` 是最常用的索引类型,用于高效的相似性搜索。\n",
"* `index.as_query_engine()` 创建用于问答的查询引擎。\n",
"* `index.as_chat_engine()` 创建用于对话的应用。\n",
"* 支持索引的持久化存储和加载。\n",
"\n",
"对于需要将外部知识库或私有数据与 LLM 结合以提供更准确、更有依据的回答的应用场景,LlamaIndex 是一个非常有价值的工具。它与 LangChain 可以互补使用,LangChain 更侧重于链和 Agent 的编排,而 LlamaIndex 在 RAG 的数据处理和查询方面更深入。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# --- Final Cleanup --- \n",
"import shutil\n",
"if 'data_dir' in locals() and os.path.exists(data_dir):\n",
" try:\n",
" shutil.rmtree(data_dir)\n",
" print(f\"Cleaned up directory: {data_dir}\")\n",
" except Exception as e:\n",
" print(f\"Error cleaning up directory {data_dir}: {e}\")\n",
"\n",
"# index_persist_dir = \"./saved_llama_index\"\n",
"# if os.path.exists(index_persist_dir):\n",
"# try:\n",
"# shutil.rmtree(index_persist_dir)\n",
"# print(f\"Cleaned up directory: {index_persist_dir}\")\n",
"# except Exception as e:\n",
"# print(f\"Error cleaning up directory {index_persist_dir}: {e}\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 5
}
KuRRe8 commented May 8, 2025

Whether you have insights, questions, or just want to chat, feel free to post here!

Since there are a lot of documents, an ipynb sometimes fails to render due to browser performance; refreshing the page usually fixes it.

Or git clone the Gist and read it locally.
