Hi everyone! I was tired of getting poor output from my LLMs, so I went looking for a way to improve the code they produce.
Here is the trick.
You'll need:
```yaml
# Configuration for the mcpdoc server
# Each entry must have an llms_txt URL and optionally a name
- name: n8n
  llms_txt: https://raw.githubusercontent.com/YoanAncelly/unofficial-llmstxt/refs/heads/main/llmstxt/n8n/llms.txt
```
```json
{
  "mcpServers": {
    "langgraph-docs-mcp": {
      "command": "uvx",
      "args": [
        "--from",
        "mcpdoc",
        "mcpdoc",
        "--yaml",
        "llms/llmstxt_config.yaml",
        "--allowed-domains",
        "https://docs.n8n.io",
        "--transport",
        "stdio"
      ],
      "alwaysAllow": [
        "list_doc_sources",
        "fetch_docs"
      ]
    }
  }
}
```
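If the server won't start, it is often easiest to reproduce the exact command your MCP client launches and run it by hand. This little sketch just inlines the JSON above as a string (your client stores it in its own settings file) and rebuilds the launch command from it:

```python
import json

# The client configuration above, embedded as a string for illustration;
# your MCP client keeps this in its own settings file.
settings = json.loads("""
{
  "mcpServers": {
    "langgraph-docs-mcp": {
      "command": "uvx",
      "args": ["--from", "mcpdoc", "mcpdoc",
               "--yaml", "llms/llmstxt_config.yaml",
               "--allowed-domains", "https://docs.n8n.io",
               "--transport", "stdio"],
      "alwaysAllow": ["list_doc_sources", "fetch_docs"]
    }
  }
}
""")

server = settings["mcpServers"]["langgraph-docs-mcp"]
command_line = [server["command"], *server["args"]]
# Paste this into a terminal to test the server outside the client:
print(" ".join(command_line))
```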
For ANY question about a technology in the tech stack below, use the langgraph-docs-mcp server to help answer:
+ call the list_doc_sources tool to get the available llms.txt file
+ call the fetch_docs tool to read it
+ reflect on the URLs listed in llms.txt
+ reflect on the input question
+ call fetch_docs on any URLs relevant to the question
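The loop those rules describe can be sketched in a few lines. Here the two MCP tools are replaced by stub functions: `list_doc_sources` and `fetch_docs` are the real tool names from the config, but their bodies, the `DOCS` data, and the keyword-matching heuristic are all hypothetical, just to show the flow.

```python
# Stub data standing in for the real network: an llms.txt listing
# two (hypothetical) n8n documentation pages.
DOCS = {
    "https://docs.n8n.io/llms.txt": (
        "https://docs.n8n.io/workflows/\n"
        "https://docs.n8n.io/credentials/\n"
    ),
}

def list_doc_sources():
    """Stub: the real MCP tool returns the configured llms.txt URLs."""
    return ["https://docs.n8n.io/llms.txt"]

def fetch_docs(url):
    """Stub: the real MCP tool fetches the page over HTTP."""
    return DOCS.get(url, "")

def relevant_docs(question):
    """Follow the rule's steps: list sources, read llms.txt, pick URLs."""
    urls = []
    for source in list_doc_sources():      # step 1: list doc sources
        listing = fetch_docs(source)       # step 2: read llms.txt
        for url in listing.splitlines():   # steps 3-4: reflect on URLs
            # Crude relevance check: does the last path segment
            # appear in the question?
            if url.rstrip("/").rsplit("/", 1)[-1] in question.lower():
                urls.append(url)           # step 5: fetch these next
    return urls

print(relevant_docs("How do I share credentials between workflows?"))
# → ['https://docs.n8n.io/workflows/', 'https://docs.n8n.io/credentials/']
```

In the real setup the model performs these steps itself through tool calls; the sketch just makes the control flow explicit.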
Tech stack:
+ n8n