"Fragments" are text you can inject into a conversation. You can start a conversation with a fragment with -f <fragment>
or insert one during a chat with !fragment <fragment>
.
The site-text fragment plugin fetches a web page, converts it to Markdown, and injects it into the prompt.
$ llm install llm-fragments-site-text
$ llm -f site:https://hinge.co/mission "What's Hinge all about?"
Hinge is an online dating platform focused on fostering meaningful, in-person connections and reducing loneliness in the world. The core mission of Hinge centers around the belief that relationships—both romantic and non-romantic—are foundational to achieving difficult goals and building strong teams.
Hinge operates on three key values:
1. **Love the Problem** - Hinge prioritizes deeply understanding the challenges faced in modern dating, ensuring that solutions are informed and thoughtful.
2. **Keep it Simple** - The platform seeks elegant and effective solutions, focusing on a few essential tasks that align with their overarching goals and saying "no" to distractions.
3. **Decide with Principles** - Decisions at Hinge are made with careful consideration of their guiding principles rather than arbitrary judgment.
Additionally, Hinge emphasizes trust within its team, fostering an environment of transparency and open communication. They work to build and maintain trust, addressing issues when they arise.
Hinge has also published a book titled "How We Do Things," which elaborates on the principles that guide their operations, ranging from hiring practices to team dynamics and goal-setting. Overall, Hinge is dedicated to creating a supportive community that enhances the dating experience and helps individuals forge meaningful connections.
Similarly, the PDF fragment plugin lets you pull the text of PDFs into a prompt.
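A minimal sketch of how that looks, assuming the plugin is published as llm-fragments-pdf and registers a pdf: prefix (the plugin name, prefix, and report.pdf are placeholders here; check the plugin's README for the exact syntax):
$ llm install llm-fragments-pdf
$ llm -f pdf:report.pdf "Summarize the key findings"  # report.pdf is a placeholder file name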
Tools are Python functions that the LLM can choose to call. You provide tools when you start a chat; they can come from plugins or be passed in as plaintext Python code.
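As a minimal sketch of the plaintext route (assuming a recent llm release that supports the --functions option; the multiply function is purely illustrative):
$ llm --functions '
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    # illustrative tool, not from any plugin
    return a * b
' "What is 34234 * 213345?"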
exa.ai provides an API for web search and question answering that is designed specifically for LLM usage. The exa plugin lets you hand this to the model as a tool (you'll need an API key).
~ $ llm install llm-tools-exa
~ $ llm keys set exa
~ $ llm chat -T Exa
Chatting with gpt-4o-mini
Type 'exit' or 'quit' to exit
Type '!multi' to enter multiple lines, then '!end' to finish
Type '!edit' to open your default editor and modify the prompt
Type '!fragment <my_fragment> [<another_fragment> ...]' to insert one or more fragments
> Use get_answer to find out who the pope currently is
The current pope is Pope Leo XIV. For more information, you can refer to the following sources:
1. [CourierPostonline](https://www.courierpostonline.com/story/news/local/south-jersey/2025/05/09/is-pope-leo-xiv-conservative-or-liberal-who-is-the-current-pope-vatican-white-smoke-villanova-grad/83535656007)
2. [USCCB](https://www.usccb.org/popes)
3. [NPR](https://www.npr.org/2025/05/08/g-s1-65147/new-pope-leo-xiv-robert-prevost-views)
The RAG plugin lets you run nearest-neighbor searches against embeddings you've created with llm.
~ $ llm install llm-tools-rag
~ $ llm embed-models default ada
~ $ echo "you miss 100% of the shots you don't take" > quote1.txt
~ $ echo "blah blah blah" > quote2.txt
~ $ llm embed-multi quotes --files . '*.txt' --store
Embedding [####################################] 100%
~ $ llm chat -T RAG
> Read me a quote from my db about trying again
Here's a quote about trying again:
"You miss 100% of the shots you don't take."
Would you like to see more quotes or something else?
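Under the hood this is the same nearest-neighbor lookup you can run yourself with llm similar (a quick sketch against the quotes collection created above; the query text is arbitrary):
~ $ llm similar quotes -c "trying again"
Each match is printed with its ID, a similarity score, and (because --store was used above) the original text.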