llm-mcp example setup flow
# Demo output - assumes llm 0.26 is installed and on the PATH
#
# llm-mcp repo:
# https://github.com/genomoncology/llm-mcp
#
# includes:
# - Desktop Commander (local MCP): https://desktopcommander.app/
# - Git MCP for simonw/llm: https://gitmcp.io/simonw/llm (auth-less remote example)
#
# version 0.0.2
# - `llm mcp servers` for add, list, view
# - register_tools converting "MCP" tools to "llm" tools
# - parsing MCP commands (npx, uv, etc.) and URLs (see the parsing sketch after this header)
# - generating names from command or url
# - prompts verified: local, remote, and multi-server
#
# roadmap to 0.1
# - remote server authentication (tokens, oauth)
# - `llm mcp toolboxes` for create, add/remove tools, list, view
# - register "default" MCP toolbox
# - toolbox: support vanilla python functions, toolbox classes
# - `llm mcp proxy` to start up MCP "proxy server" to a toolbox
# - proxy auth?
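#
# a rough sketch of how an added server spec might be classified as a remote
# URL vs. a local stdio command -- illustrative only, not the plugin's actual API:
```python
import shlex
from urllib.parse import urlparse

def parse_server_spec(spec: str) -> dict:
    """Classify a server spec: http(s) URLs are remote, anything else is a stdio command."""
    if urlparse(spec).scheme in ("http", "https"):
        return {"transport": "http", "url": spec}
    # e.g. "npx @wonderwhy-er/desktop-commander" -> command "npx" plus its args
    command, *args = shlex.split(spec)
    return {"transport": "stdio", "command": command, "args": args}

assert parse_server_spec("https://gitmcp.io/simonw/llm") == {
    "transport": "http", "url": "https://gitmcp.io/simonw/llm"}
assert parse_server_spec("npx @wonderwhy-er/desktop-commander") == {
    "transport": "stdio", "command": "npx", "args": ["@wonderwhy-er/desktop-commander"]}
```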
% llm install llm-mcp
% llm --version
llm, version 0.26
% llm plugins
[
  {
    "name": "llm-mcp",
    "hooks": [
      "register_commands",
      "register_tools"
    ],
    "version": "0.0.2"
  },
  {
    "name": "llm-gemini",
    "hooks": [
      "register_commands",
      "register_embedding_models",
      "register_models"
    ],
    "version": "0.21"
  }
]
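# the register_tools hook above is how llm plugins expose tools; a minimal sketch
# of that pattern, with the MCP forwarding reduced to stubs (load_registered_servers
# and call_mcp_tool are hypothetical stand-ins, not the plugin's real internals):
```python
import llm

def load_registered_servers() -> list[dict]:
    """Hypothetical: the real plugin reads its saved server registry here."""
    return []

def call_mcp_tool(server: str, tool: str, arguments: dict):
    """Hypothetical: the real plugin forwards this call to the MCP server."""
    raise NotImplementedError

def make_proxy(server_name: str, tool_name: str):
    """Wrap one MCP tool as a plain Python callable that llm can register."""
    def proxy(**kwargs):
        return call_mcp_tool(server_name, tool_name, kwargs)
    proxy.__name__ = tool_name  # so `llm tools` lists it under the MCP tool's name
    return proxy

@llm.hookimpl
def register_tools(register):
    for server in load_registered_servers():
        for tool in server["tools"]:
            register(make_proxy(server["name"], tool["name"]))
```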
# specify --name to override the generated server name (gitmcp_llm here)
# see logic: https://github.com/genomoncology/llm-mcp/blob/main/src/llm_mcp/utils/generate_server_name.py (a rough re-implementation is sketched below)
% llm mcp servers add "https://gitmcp.io/simonw/llm"
✔ added server 'gitmcp_llm' with 4 tools
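# a rough re-implementation of the naming idea, for illustration only;
# the authoritative logic is generate_server_name.py linked above:
```python
import re
from urllib.parse import urlparse

def generate_server_name(spec: str) -> str:
    if spec.startswith(("http://", "https://")):
        parsed = urlparse(spec)
        host = parsed.hostname.split(".")[0]                # "gitmcp.io" -> "gitmcp"
        tail = parsed.path.rstrip("/").rsplit("/", 1)[-1]   # "/simonw/llm" -> "llm"
        raw = f"{host}_{tail}" if tail else host
    else:
        # "npx @wonderwhy-er/desktop-commander" -> "desktop-commander"
        raw = spec.split()[-1].rsplit("/", 1)[-1]
    return re.sub(r"[^a-z0-9]+", "_", raw.lower()).strip("_")

assert generate_server_name("https://gitmcp.io/simonw/llm") == "gitmcp_llm"
assert generate_server_name("npx @wonderwhy-er/desktop-commander") == "desktop_commander"
```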
% llm mcp servers list
gitmcp_llm
% llm mcp servers view gitmcp_llm | head -n 15
{
  "name": "gitmcp_llm",
  "parameters": {
    "url": "https://gitmcp.io/simonw/llm",
    "headers": {},
    "timeout": 30,
    "sse_read_timeout": 300,
    "terminate_on_close": true
  },
  "tools": [
    {
      "name": "fetch_llm_documentation",
      "description": "Fetch entire documentation file from GitHub repository: simonw/llm. Useful for general questions. Always call this tool first if asked about simonw/llm.",
      "inputSchema": {},
      "annotations": {
% llm -T search_llm_documentation -T fetch_generic_url_content -m gpt-4.1-mini "Search and fetch docs for how to specify a schema in llm project."
The llm project supports specifying a schema for structured JSON output from language models. Here is a summary of how to specify schemas:
1. Concise Schema Syntax (DSL):
   - You can specify a schema directly as a string with comma-separated or newline-separated field definitions.
   - Example for a simple dog schema with name (string), age (int), and one_sentence_bio (string):
     ```bash
     llm --schema 'name, age int, one_sentence_bio' 'invent a cool dog'
     ```
... (snipped) ...
These options allow you to specify desired structured output from different supported LLM models (OpenAI, Anthropic, Google Gemini, etc.) that support JSON schema output.
If you want more detailed docs or examples, please ask!
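# the same multi-tool prompt via llm's Python API -- a sketch assuming llm 0.26's
# documented llm.get_tools() and model.chain() interfaces:
```python
import llm

tools = llm.get_tools()  # all tools registered by plugins, keyed by name
model = llm.get_model("gpt-4.1-mini")
chain = model.chain(
    "Search and fetch docs for how to specify a schema in llm project.",
    tools=[tools["search_llm_documentation"], tools["fetch_generic_url_content"]],
)
print(chain.text())  # chain() keeps calling tools until the model stops requesting them
```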
# specify --name to override the generated name (desktop_commander here)
data % llm mcp servers add "npx @wonderwhy-er/desktop-commander"
✔ added server 'desktop_commander' with 18 tools
# the **kwargs signatures need to be fixed via an llm Tool improvement or signature magic (one approach is sketched after this listing)
data % llm tools | grep kw
create_directory(**kwargs: Any) -> Any (plugin: mcp)
edit_block(**kwargs: Any) -> Any (plugin: mcp)
execute_command(**kwargs: Any) -> Any (plugin: mcp)
fetch_generic_url_content(**kwargs: Any) -> Any (plugin: mcp)
fetch_llm_documentation(**kwargs: Any) -> Any (plugin: mcp)
force_terminate(**kwargs: Any) -> Any (plugin: mcp)
get_config(**kwargs: Any) -> Any (plugin: mcp)
get_file_info(**kwargs: Any) -> Any (plugin: mcp)
kill_process(**kwargs: Any) -> Any (plugin: mcp)
list_directory(**kwargs: Any) -> Any (plugin: mcp)
list_processes(**kwargs: Any) -> Any (plugin: mcp)
list_sessions(**kwargs: Any) -> Any (plugin: mcp)
move_file(**kwargs: Any) -> Any (plugin: mcp)
read_file(**kwargs: Any) -> Any (plugin: mcp)
read_multiple_files(**kwargs: Any) -> Any (plugin: mcp)
read_output(**kwargs: Any) -> Any (plugin: mcp)
search_code(**kwargs: Any) -> Any (plugin: mcp)
search_files(**kwargs: Any) -> Any (plugin: mcp)
search_llm_code(**kwargs: Any) -> Any (plugin: mcp)
search_llm_documentation(**kwargs: Any) -> Any (plugin: mcp)
set_config_value(**kwargs: Any) -> Any (plugin: mcp)
write_file(**kwargs: Any) -> Any (plugin: mcp)
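# one possible "signature magic" fix: attach a real inspect.Signature built from each
# tool's inputSchema, so introspection shows named parameters instead of **kwargs
# (whether llm's tool listing honors __signature__ is an assumption; the helper is illustrative):
```python
import inspect

TYPE_MAP = {"string": str, "integer": int, "number": float, "boolean": bool}

def apply_schema_signature(fn, input_schema: dict):
    """Attach a signature derived from an MCP inputSchema (hypothetical helper)."""
    required = set(input_schema.get("required", []))
    fn.__signature__ = inspect.Signature([
        inspect.Parameter(
            name,
            kind=inspect.Parameter.KEYWORD_ONLY,
            default=inspect.Parameter.empty if name in required else None,
            annotation=TYPE_MAP.get(spec.get("type"), object),
        )
        for name, spec in input_schema.get("properties", {}).items()
    ])
    return fn

def read_file(**kwargs):  # stand-in for the MCP proxy callable
    ...

apply_schema_signature(read_file, {
    "properties": {"path": {"type": "string"}},
    "required": ["path"],
})
print(inspect.signature(read_file))  # -> (*, path: str)
```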
# use desktop commander to read a file
# see: https://github.com/genomoncology/llm-mcp/tree/main/tests/data
% cat secret.txt
walrus
% llm -T read_file -m gpt-4.1-nano "What is the secret word in the file secret.txt?"
The secret word in the file secret.txt is "walrus".
# setup the joke
% cat joke.txt
Why don't pelicans like to tip waiters?
# 3-step example: read a file, search the repo, and fetch a file
data % llm prompt -m gpt-4.1 -T read_file -T search_llm_code -T fetch_generic_url_content "Read the joke in joke.txt. Then search using that exact joke in the simonw/llm repo, then display the punchline found after fetching the content."
The joke in joke.txt is:
> Why don't pelicans like to tip waiters?
After searching for this exact joke in the simonw/llm repo, I discovered the punchline in both the documentation and the README:
> Because they always have a big bill!
Here's the full Q&A as shown in the LLM repo:
**Q:** Why don't pelicans like to tip waiters?
**A:** Because they always have a big bill!