{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Langchain Self Query Deconstruction (With ChromaDB)\n",
"\n",
"Let's reverse engineer the Langchain [self query](https://python.langchain.com/docs/modules/data_connection/retrievers/self_query) feature to see how it works under the hood. This is to provide some educational insights more generally into how we go from a natural language query to a structured query that can be issued against a vector store, like [ChromaDB](https://docs.trychroma.com/usage-guide#querying-a-collection).\n",
"\n",
"In other words, against a Chroma collection of movies, we might want to go from something like:\n",
"\n",
"`Any cartoons about toys with more than 8.5 stars?`\n",
"\n",
"To something like this:\n",
"\n",
"`('toys', {'filter': {'$and': [{'genre': {'$eq': 'animated'}}, {'rating': {'$gt': 8.5}}]}})`\n",
"\n",
"Knowing that there is a Chroma [metadata_field](https://docs.trychroma.com/usage-guide#filtering-by-metadata) called `genre` with a possible enumeration of `animated` and that there is a numerical `rating` field. This structured query can then be transformed into an API call to Chroma (having been parsed against allowable query operations and operators).\n",
"\n",
"![](https://python.langchain.com/assets/images/self_querying-26ac0fc8692e85bc3cd9b8640509404f.jpg)\n"
]
},
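{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a preview of the destination, here is the Chroma-native shape of the `where` filter that the structured query above maps to (a sketch; the commented-out `collection.query` call assumes a populated collection):\n",
"\n",
"```python\n",
"# Chroma-native form of the filter from the structured query above\n",
"where = {\"$and\": [{\"genre\": {\"$eq\": \"animated\"}}, {\"rating\": {\"$gt\": 8.5}}]}\n",
"\n",
"# Hypothetical call against a populated collection:\n",
"# collection.query(query_texts=[\"toys\"], where=where, n_results=1)\n",
"```\n"
]
},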
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import chromadb\n",
"import os\n",
"import subprocess\n",
"from dotenv import load_dotenv\n",
"\n",
"# Load environment variables (e.g. OpenAI key) from local .env\n",
"load_dotenv()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Run the example code given by [Langchain](https://python.langchain.com/docs/modules/data_connection/retrievers/self_query#creating-our-self-querying-retriever)"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.vectorstores import Chroma\n",
"from langchain_core.documents import Document\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"openai = OpenAIEmbeddings(model=\"text-embedding-3-small\")\n",
"\n",
"docs = [\n",
" Document(\n",
" page_content=\"A bunch of scientists bring back dinosaurs and mayhem breaks loose\",\n",
" metadata={\"year\": 1993, \"rating\": 7.7, \"genre\": \"science fiction\"},\n",
" ),\n",
" Document(\n",
" page_content=\"Leo DiCaprio gets lost in a dream within a dream within a dream within a ...\",\n",
" metadata={\"year\": 2010, \"director\": \"Christopher Nolan\", \"rating\": 8.2},\n",
" ),\n",
" Document(\n",
" page_content=\"A psychologist / detective gets lost in a series of dreams within dreams \\\n",
" within dreams and Inception reused the idea\",\n",
" metadata={\"year\": 2006, \"director\": \"Satoshi Kon\", \"rating\": 8.6},\n",
" ),\n",
" Document(\n",
" page_content=\"A bunch of normal-sized women are supremely wholesome and some men pine after them\",\n",
" metadata={\"year\": 2019, \"director\": \"Greta Gerwig\", \"rating\": 8.3},\n",
" ),\n",
" Document(\n",
" page_content=\"Toys come alive and have a blast doing so\",\n",
" metadata={\"year\": 1995, \"genre\": \"animated\"},\n",
" ),\n",
" Document(\n",
" page_content=\"Three men walk into the Zone, three men walk out of the Zone\",\n",
" metadata={\n",
" \"year\": 1979,\n",
" \"director\": \"Andrei Tarkovsky\",\n",
" \"genre\": \"thriller\",\n",
" \"rating\": 9.9,\n",
" },\n",
" ),\n",
"]\n",
"vectorstore = Chroma.from_documents(docs, OpenAIEmbeddings())\n",
"\n",
"# This is how you would load a collection:\n",
"# vectorstore = Chroma(\"movies\", OpenAIEmbeddings())\n",
"\n",
"\n",
"from langchain.chains.query_constructor.base import AttributeInfo\n",
"from langchain.retrievers.self_query.base import SelfQueryRetriever\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"metadata_field_info = [\n",
" AttributeInfo(\n",
" name=\"genre\",\n",
" description=\"The genre of the movie. One of ['science fiction', 'comedy', 'drama', \"\n",
" \"'thriller', 'romance', 'action', 'animated']\",\n",
" type=\"string\",\n",
" ),\n",
" AttributeInfo(\n",
" name=\"year\",\n",
" description=\"The year the movie was released\",\n",
" type=\"integer\",\n",
" ),\n",
" AttributeInfo(\n",
" name=\"director\",\n",
" description=\"The name of the movie director\",\n",
" type=\"string\",\n",
" ),\n",
" AttributeInfo(\n",
" name=\"rating\", description=\"A 1-10 rating for the movie\", type=\"float\"\n",
" ),\n",
"] \n",
"\n",
"document_content_description = \"Brief summary of a movie\"\n",
"llm = ChatOpenAI(temperature=0)\n",
"retriever = SelfQueryRetriever.from_llm(\n",
" llm,\n",
" vectorstore,\n",
" document_content_description,\n",
" metadata_field_info,\n",
")\n",
"\n",
"from langchain.chains.query_constructor.base import (\n",
" StructuredQueryOutputParser,\n",
" get_query_constructor_prompt,\n",
")\n",
"\n",
"prompt = get_query_constructor_prompt(\n",
" document_content_description,\n",
" metadata_field_info,\n",
")\n",
"output_parser = StructuredQueryOutputParser.from_components()\n",
"query_constructor = prompt | llm | output_parser"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(None, None)"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"test_prompt = prompt.format(query=\"What can I watch with more than 8.5 stars?\")\n",
"\n",
"\n",
"# Copy the formatted prompt to the clipboard using pbcopy (Mac OS) if you want to paste it into the playground\n",
"process = subprocess.Popen('pbcopy', env={'LANG': 'en_US.UTF-8'}, stdin=subprocess.PIPE)\n",
"process.communicate(test_prompt.encode('utf-8'))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# The Prompt: Asking the LLM to Build the Query\n",
"\n",
"Your goal is to incorporate the user's NL query and information about the vector store collection (e.g. its meta fields) in order to generate a prompt like the one below, which is asking the LLM to convert the NL query into a store-independent query language (designed by the Langchain folks as a suitable internal representation of queries).\n",
"\n",
"## Prompt prefix\n",
"\n",
"The first part (prefix) (sometimes called \"the context\") is telling the LLM about an independent query language and how it might be constructed, suggesting an output format of json.\n",
"\n",
"````text\n",
"<< Structured Request Schema >>\n",
"When responding use a markdown code snippet with a JSON object formatted in the following schema:\n",
"\n",
"```json\n",
"{\n",
" \"query\": string \\ text string to compare to document contents\n",
" \"filter\": string \\ logical condition statement for filtering documents\n",
"}\n",
"```\n",
"\n",
"The query string should contain only text that is expected to match the contents of documents. \n",
"Any conditions in the filter should not be mentioned in the query as well.\n",
"\n",
"A logical condition statement is composed of one or more comparison and logical operation statements.\n",
"\n",
"A comparison statement takes the form: `comp(attr, val)`:\n",
"- `comp` (eq | ne | gt | gte | lt | lte | contain | like | in | nin): comparator\n",
"- `attr` (string): name of attribute to apply the comparison to\n",
"- `val` (string): is the comparison value\n",
"\n",
"A logical operation statement takes the form `op(statement1, statement2, ...)`:\n",
"- `op` (and | or | not): logical operator\n",
"- `statement1`, `statement2`, ... (comparison statements or logical operation statements): \n",
"one or more statements to apply the operation to\n",
"\n",
"Make sure that you only use the comparators and logical operators listed above and no others.\n",
"Make sure that filters only refer to attributes that exist in the data source.\n",
"Make sure that filters only use the attributed names with its function names if there are functions applied on them.\n",
"Make sure that filters only use format `YYYY-MM-DD` when handling date data typed values.\n",
"Make sure that filters take into account the descriptions of attributes and only make comparisons \n",
"that are feasible given the type of data being stored.\n",
"Make sure that filters are only used as needed. If there are no filters that should be \n",
"applied return \"NO_FILTER\" for the filter value.\n",
"````\n",
"\n",
"\n",
"## Prompt examples (for few-shot method)\n",
"\n",
"Now we have two examples as part of a few-shot prompt template. These are canned examples that already live in the Langchain code base:\n",
"\n",
"\n",
"````text\n",
"<< Example 1. >>\n",
"Data Source:\n",
"```json\n",
"{\n",
" \"content\": \"Lyrics of a song\",\n",
" \"attributes\": {\n",
" \"artist\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"Name of the song artist\"\n",
" },\n",
" \"length\": {\n",
" \"type\": \"integer\",\n",
" \"description\": \"Length of the song in seconds\"\n",
" },\n",
" \"genre\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"The song genre, one of \"pop\", \"rock\" or \"rap\"\"\n",
" }\n",
" }\n",
"}\n",
"```\n",
"\n",
"User Query:\n",
"What are songs by Taylor Swift or Katy Perry about teenage romance under \n",
"3 minutes long in the dance pop genre\n",
"\n",
"Structured Request:\n",
"```json\n",
"{\n",
" \"query\": \"teenager love\",\n",
" \"filter\": \"and(or(eq(\\\"artist\\\", \\\"Taylor Swift\\\"), eq(\\\"artist\\\", \\\"Katy Perry\\\")), lt(\\\"length\\\", 180), eq(\\\"genre\\\", \\\"pop\\\"))\"\n",
"}\n",
"```\n",
"\n",
"\n",
"<< Example 2. >>\n",
"Data Source:\n",
"```json\n",
"{\n",
" \"content\": \"Lyrics of a song\",\n",
" \"attributes\": {\n",
" \"artist\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"Name of the song artist\"\n",
" },\n",
" \"length\": {\n",
" \"type\": \"integer\",\n",
" \"description\": \"Length of the song in seconds\"\n",
" },\n",
" \"genre\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"The song genre, one of \"pop\", \"rock\" or \"rap\"\"\n",
" }\n",
" }\n",
"}\n",
"```\n",
"\n",
"User Query:\n",
"What are songs that were not published on Spotify\n",
"\n",
"Structured Request:\n",
"```json\n",
"{\n",
" \"query\": \"\",\n",
" \"filter\": \"NO_FILTER\"\n",
"}\n",
"```\n",
"````\n",
"\n",
"## Prompt suffix\n",
"\n",
"And this is the suffix that gets appended to the above query in order to give an example that is specifically related to the user's NL query:\n",
"\n",
"````text\n",
"<< Example 3. >>\n",
"Data Source:\n",
"```json\n",
"{\n",
" \"content\": \"Brief summary of a movie\",\n",
" \"attributes\": {\n",
" \"genre\": {\n",
" \"description\": \"The genre of the movie. One of ['science fiction', 'comedy', 'drama', 'thriller', 'romance', 'action', 'animated']\",\n",
" \"type\": \"string\"\n",
" },\n",
" \"year\": {\n",
" \"description\": \"The year the movie was released\",\n",
" \"type\": \"integer\"\n",
" },\n",
" \"director\": {\n",
" \"description\": \"The name of the movie director\",\n",
" \"type\": \"string\"\n",
" },\n",
" \"rating\": {\n",
" \"description\": \"A 1-10 rating for the movie\",\n",
" \"type\": \"float\"\n",
" }\n",
"}\n",
"}\n",
"```\n",
"\n",
"User Query:\n",
"What can I watch with more than 8.5 stars?\n",
"\n",
"Structured Request:\n",
"````\n"
]
},
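{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the grammar above concrete, here is a minimal, illustrative parser for such filter strings. This is only a sketch -- Langchain's actual `StructuredQueryOutputParser` is far more robust:\n",
"\n",
"```python\n",
"import re\n",
"\n",
"# Tokens: name( | ) | , | \"string\" | number\n",
"TOKEN = re.compile(r'\\s*(?:(\\w+)\\(|(\\))|(,)|\"([^\"]*)\"|(-?\\d+\\.?\\d*))')\n",
"\n",
"def parse_filter(s):\n",
"    # Parse e.g. and(eq(\"genre\", \"animated\"), gt(\"rating\", 8.5))\n",
"    pos = 0\n",
"    def parse():\n",
"        nonlocal pos\n",
"        m = TOKEN.match(s, pos)\n",
"        pos = m.end()\n",
"        if m.group(1):  # name( -> operator or comparator call\n",
"            name, args = m.group(1), []\n",
"            while True:\n",
"                args.append(parse())\n",
"                m2 = TOKEN.match(s, pos)  # expect , or )\n",
"                pos = m2.end()\n",
"                if m2.group(2):\n",
"                    break\n",
"            return (name, args)\n",
"        if m.group(4) is not None:  # quoted string\n",
"            return m.group(4)\n",
"        num = m.group(5)\n",
"        return float(num) if '.' in num else int(num)\n",
"    return parse()\n",
"\n",
"parse_filter('and(eq(\"genre\", \"animated\"), gt(\"rating\", 8.5))')\n",
"# ('and', [('eq', ['genre', 'animated']), ('gt', ['rating', 8.5])])\n",
"```\n"
]
},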
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Explanation of prompt\n",
"\n",
"Firstly, where does it come from?\n",
"It comes from the [query_constructor/prompt.py](https://github.com/langchain-ai/langchain/blob/b051bba1a9f3f2c6020d7c8dbcc792d14b3cbe17/libs/langchain/langchain/chains/query_constructor/prompt.py#L4) file.\n",
"\n",
"But the prompt itself, as above, is actually constructed via a `FewShotPromptTemplate` class from `langchain_core.prompts.few_shot` which is of the form:\n",
"\n",
"```\n",
"prefix\n",
"example_prompt(examples) \n",
"suffix(query)\n",
"```\n",
"\n",
"- prefix is the instructions to the LLM on how to formulate the query\n",
"- examples are the canned examples (which come from the prompt.py above)\n",
"- example_prompt is the format of the examples prompt (as populated by examples)\n",
"- suffix is the format of the final example injected with the user's query (query)\n",
"\n"
]
},
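{
"cell_type": "markdown",
"metadata": {},
"source": [
"The assembly can be sketched in plain Python. All the strings below are hypothetical stand-ins, not Langchain's actual prompt components:\n",
"\n",
"```python\n",
"# Plain-Python sketch of prefix + formatted examples + suffix assembly.\n",
"prefix = '<< Structured Request Schema >>\\n...instructions...'\n",
"example_tmpl = '<< Example {i}. >>\\nUser Query:\\n{user_query}\\nStructured Request:\\n{request}'\n",
"examples = [\n",
"    {'i': 1, 'user_query': 'songs under 3 minutes', 'request': '{\"query\": \"\", \"filter\": \"lt(length, 180)\"}'},\n",
"]\n",
"suffix = '<< Example {n}. >>\\nUser Query:\\n{query}\\nStructured Request:\\n'\n",
"\n",
"def format_prompt(query):\n",
"    body = '\\n\\n'.join(example_tmpl.format(**e) for e in examples)\n",
"    return prefix + '\\n\\n' + body + '\\n\\n' + suffix.format(n=len(examples) + 1, query=query)\n",
"\n",
"print(format_prompt('What can I watch with more than 8.5 stars?'))\n",
"```\n"
]
},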
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"langchain_core.prompts.few_shot.FewShotPromptTemplate"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"type(prompt)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<< Example 3. >>\n",
"Data Source:\n",
"```json\n",
"{{\n",
" \"content\": \"Brief summary of a movie\",\n",
" \"attributes\": {{\n",
" \"genre\": {{\n",
" \"description\": \"The genre of the movie. One of ['science fiction', 'comedy', 'drama', 'thriller', 'romance', 'action', 'animated']\",\n",
" \"type\": \"string\"\n",
" }},\n",
" \"year\": {{\n",
" \"description\": \"The year the movie was released\",\n",
" \"type\": \"integer\"\n",
" }},\n",
" \"director\": {{\n",
" \"description\": \"The name of the movie director\",\n",
" \"type\": \"string\"\n",
" }},\n",
" \"rating\": {{\n",
" \"description\": \"A 1-10 rating for the movie\",\n",
" \"type\": \"float\"\n",
" }}\n",
"}}\n",
"}}\n",
"```\n",
"\n",
"User Query:\n",
"{query}\n",
"\n",
"Structured Request:\n",
"\n"
]
}
],
"source": [
"# let's look at the suffix in particular:\n",
"print(prompt.suffix)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"The prompt will be used to ask an LLM to generate a structure request, which is a string of the form:\n",
"\n",
"```text\n",
"{\n",
" \"query\": string \\ text string to compare to document contents\n",
" \"filter\": string \\ logical condition statement for filtering documents\n",
"}\n",
"```\n",
"\n",
"This will be used by langchain to create a `StructuredQuery` object:\n",
"\n",
"```python\n",
"class StructuredQuery(Expr):\n",
" \"\"\"A structured query.\"\"\"\n",
"\n",
" query: str\n",
" \"\"\"Query string.\"\"\"\n",
" filter: Optional[FilterDirective]\n",
" \"\"\"Filtering expression.\"\"\"\n",
" limit: Optional[int]\n",
" \"\"\"Limit on the number of results.\"\"\"\n",
"```\n",
"\n",
"This is an internal representation (IR) of a query. This IR is a way for langchain to represent queries (to a vector store) in a store-independent format.\n",
"\n",
"The `query` is a string, which nominally will be the user's natural language query (i.e. a question like `What can I watch with more than 8.5 stars?`).\n",
"\n",
"The `filter` has type `FilterDirective`\n",
"\n",
"```python\n",
"class FilterDirective(Expr, ABC):\n",
" \"\"\"A filtering expression.\"\"\"\n",
"```\n",
"\n",
"Where `Expr` implements a base class for all expressions (e.g. `eq(\\\"artist\\\", \\\"Katy Perry\\\")`) and which abides by the so-called [Visitor pattern](), hence the `Expr` class has an `accept` method to indicate which kinds of visitors it will accept. \n",
"\n",
"The `Visitor` abstract class offers an interface that declares a method for each type of object that the visitor can visit, such as: `visit_structured_query` which is of relevance here as this is the method that will translate a `StructuredQuery` into a Chroma query eventually.\n",
"\n",
"Overall, this code provides a framework for representing \"SQL-like\" queries in Python code using a structured approach, facilitating operations like filtering, logical operations, and query structure representation.\n",
"\n",
"## Prompt Response\n",
"\n",
"So, from the above we can see that __we're expecting the LLM to do all the work in identifying a store-independent query based upon the examples given in a few-shot prompt__.\n",
"\n",
"If we feed the above question `What can I watch with more than 8.5 stars?` into the OpenAI playground, we get the response:\n",
"\n",
"````text\n",
"```json\n",
"{\n",
" \"query\": \"\",\n",
" \"filter\": \"gt(\\\"rating\\\", 8.5)\"\n",
"}\n",
"```\n",
"````\n",
"\n",
"\n",
"Notice, per the instructions in the prompt `When responding use a markdown code snippet with a JSON object` the output is indeed a markdown JSON snippet.\n",
"\n",
"\n",
"See [playground example](https://platform.openai.com/playground/p/13qkzZeKhiTxFAbLhGSYzhRV?model=gpt-3.5-turbo-0125&mode=chat)\n",
"\n",
"This is what we expect, as we are asking for any doc with more than 8.5 stars, but otherwise no specific content.\n",
"\n",
"Note that I then asked GPT for a non-empty query and it suggest: \n",
"\n",
"\n",
"````text\n",
"User Query: \n",
"Can you recommend dramas released after 2010 with a rating above 7.5?\n",
"\n",
"Structured Request:\n",
"\n",
"```json\n",
"{\n",
" \"query\": \"dramas\",\n",
" \"filter\": \"and(gt(\\\"year\\\", 2010), gt(\\\"rating\\\", 7.5))\"\n",
"}\n",
"```\n",
"````\n",
"\n",
"This query is wrong!!! Ideally, it should have added `genre` to the filter (set to `drama`) and not put `drama` into the query string itself. This shows one of the challenges of using this method -- the LLM is going to do all the interpretive work in generating a query and it can hallucinate.\n",
"\n",
"Let's now look at the anatomy of a query object:\n"
]
},
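{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before that, the visitor pattern itself can be illustrated with a toy version -- simplified names mirroring the idea only, not the actual Langchain classes:\n",
"\n",
"```python\n",
"# Toy illustration of the visitor pattern behind the query IR.\n",
"class Comparison:\n",
"    def __init__(self, comparator, attribute, value):\n",
"        self.comparator, self.attribute, self.value = comparator, attribute, value\n",
"\n",
"    def accept(self, visitor):\n",
"        return visitor.visit_comparison(self)\n",
"\n",
"class ChromaStyleVisitor:\n",
"    def visit_comparison(self, comp):\n",
"        return {comp.attribute: {'$' + comp.comparator: comp.value}}\n",
"\n",
"Comparison('gt', 'rating', 8.5).accept(ChromaStyleVisitor())\n",
"# {'rating': {'$gt': 8.5}}\n",
"```\n"
]
},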
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"from langchain.retrievers.self_query.chroma import ChromaTranslator\n",
"\n",
"retriever = SelfQueryRetriever(\n",
" query_constructor=query_constructor,\n",
" vectorstore=vectorstore,\n",
" structured_query_translator=ChromaTranslator(),\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'director': 'Greta Gerwig', 'rating': 8.3, 'year': 2019})]"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# This example specifies a query and a filter\n",
"retriever.invoke(\"Has Greta Gerwig directed any movies about women\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Imaginary Meta Fields\n",
"\n",
"What happens via the standard API if you introduce something that looks like it might involve meta data that does not exist in the `meta-info` dict -- e.g. \"cost\" or \"price\"..."
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"cost_prompt = prompt.invoke(input={\"query\":\\\n",
" \"Can you recommend a drama film with a rating above 7.5 and that costs less then 10 dollars?\"})"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"```json\n",
"{\n",
" \"query\": \"drama film\",\n",
" \"filter\": \"and(eq(\\\"genre\\\", \\\"drama\\\"), gt(\\\"rating\\\", 7.5), lt(\\\"cost\\\", 10))\"\n",
"}\n",
"```\n"
]
}
],
"source": [
"print(llm.invoke(cost_prompt).content)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that it has generated a field `cost` that does not exist in the `meta-info`.\n",
"\n",
"\n",
"# How does the self-query chain work?\n",
"\n",
"This line of code from above is going to do all the work in running a query against the store (run a chain):\n",
"\n",
"`query_constructor = prompt | llm | output_parser`\n",
"\n",
"Note that the logical OR operator `|` is overridden here to mimic the unix pipe. In other words, this is a chain.\n",
"\n",
"Let's walk through running the chain, call by call, to see how it works."
]
},
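{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of how `|` can be overloaded to build such a pipeline -- illustrative only; LCEL's `Runnable` machinery is far richer:\n",
"\n",
"```python\n",
"# Minimal operator-overloaded chaining, in the spirit of LCEL's pipe.\n",
"class Step:\n",
"    def __init__(self, fn):\n",
"        self.fn = fn\n",
"\n",
"    def __or__(self, other):\n",
"        # left | right -> a new Step that runs left, then right\n",
"        return Step(lambda x: other.fn(self.fn(x)))\n",
"\n",
"    def invoke(self, x):\n",
"        return self.fn(x)\n",
"\n",
"chain = Step(str.strip) | Step(str.upper)\n",
"chain.invoke('  toys  ')  # 'TOYS'\n",
"```\n"
]
},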
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"prompt_obj = prompt.invoke(input={\"query\":\"Any cartoons about toys with more than 8.5 stars?\"})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We don't print out the prompt here, but it's similar to the example above."
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"llm_gen = llm.invoke(prompt_obj)"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"```json\n",
"{\n",
" \"query\": \"toys\",\n",
" \"filter\": \"and(eq(\\\"genre\\\", \\\"animated\\\"), gt(\\\"rating\\\", 8.5))\"\n",
"}\n",
"```\n"
]
}
],
"source": [
"print(llm_gen.content)"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"StructuredQuery(query='toys', filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='animated'), Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5)]), limit=None)"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"struct_query = output_parser.parse(llm_gen.content)\n",
"struct_query"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"('toys', {'filter': {'$and': [{'genre': {'$eq': 'animated'}}, {'rating': {'$gt': 8.5}}]}})\n"
]
}
],
"source": [
"chroma_query = retriever.structured_query_translator.visit_structured_query(struct_query)\n",
"print(chroma_query)"
]
},
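{
"cell_type": "markdown",
"metadata": {},
"source": [
"The translation performed by the `ChromaTranslator` can be sketched as a small recursive function over an `(operator, arguments)` tree -- illustrative, not the actual implementation:\n",
"\n",
"```python\n",
"# Illustrative recursive translation of an (operator, args) tree into a Chroma filter.\n",
"def to_chroma(node):\n",
"    op, args = node\n",
"    if op in ('and', 'or'):\n",
"        return {'$' + op: [to_chroma(a) for a in args]}\n",
"    attr, value = args  # a comparison like ('gt', ['rating', 8.5])\n",
"    return {attr: {'$' + op: value}}\n",
"\n",
"tree = ('and', [('eq', ['genre', 'animated']), ('gt', ['rating', 8.5])])\n",
"to_chroma(tree)\n",
"# {'$and': [{'genre': {'$eq': 'animated'}}, {'rating': {'$gt': 8.5}}]}\n",
"```\n"
]
},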
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='Toys come alive and have a blast doing so', metadata={'genre': 'animated', 'year': 1995})]"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"retriever.vectorstore.search(chroma_query[0], search_type=retriever.search_type, k=1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Flow Summary\n",
"\n",
"The above code steps are summarized diagrammatically:\n",
"\n",
"![self-query-flow](https://gist.github.com/assets/28526/27e99b0d-d4b6-4614-94fe-a6ada2ec5cdb)\n",
"\n",
"Of course, you can override the prompt with your own example -- i.e. to change the `examples` part in order to see if that makes a difference to your use case. However, the generic examples are a guide in lieu of the `prefix` that describes the store-independent query language."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python [conda env:langchain]",
"language": "python",
"name": "conda-env-langchain-py"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}