UX Hololeo (hololeo)
@hololeo
hololeo / claude_3.5_sonnet_artifacts.xml
Created November 11, 2024 22:53 — forked from dedlim/claude_3.5_sonnet_artifacts.xml
Claude 3.5 Sonnet, Full Artifacts System Prompt
<artifacts_info>
The assistant can create and reference artifacts during conversations. Artifacts are for substantial, self-contained content that users might modify or reuse, displayed in a separate UI window for clarity.
# Good artifacts are...
- Substantial content (>15 lines)
- Content that the user is likely to modify, iterate on, or take ownership of
- Self-contained, complex content that can be understood on its own, without context from the conversation
- Content intended for eventual use outside the conversation (e.g., reports, emails, presentations)
- Content likely to be referenced or reused multiple times
@hololeo
hololeo / gist:c286f7fddc4b48bc6447566b384e8054
Created October 24, 2024 21:16
Ollama cheatsheets and tips
# taken from Ollama discord users
Ollama Cheat Sheet: Use Cases and Commands
A quick reference of common Ollama commands and their use cases.
Basic Commands
ollama run [model_name]: Starts an interactive session with the named model. For example, ollama run llama2 starts a conversation with the Llama 2 7B model.
ollama pull [model_name]: Downloads a model from the Ollama registry. Example: ollama pull llama2-uncensored downloads the uncensored variant of Llama 2.
from litellm import completion
from datetime import datetime
import os
import glob

# importlib.reload(chatbot); from chatbot import Chatbot; hacs = Chatbot()

class Chatbot:
    def __init__(self, model="ollama/llama3.2:latest", api_base="http://localhost:11434"):
        self.model = model
        self.api_base = api_base
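The gist preview is cut off after __init__, so no chat method is shown. A minimal sketch of how such a chatbot might track conversation history, using the OpenAI-style message format that litellm's completion() expects; the ChatHistory name and the usage lines are illustrative assumptions, and the actual litellm call is omitted so the example stays self-contained:

```python
class ChatHistory:
    """Tracks messages as the list of {"role", "content"} dicts
    that litellm's completion() accepts."""

    def __init__(self):
        self.messages = []

    def add_user(self, text):
        # Record the user's turn.
        self.messages.append({"role": "user", "content": text})

    def add_assistant(self, text):
        # Record the model's reply so follow-up turns have context.
        self.messages.append({"role": "assistant", "content": text})


# Hypothetical usage inside a chat loop:
history = ChatHistory()
history.add_user("hello")
history.add_assistant("hi there")
print(len(history.messages))  # 2
```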
@hololeo
hololeo / gist:0d1677adf84041c230545c2eb5be7f1c
Created October 21, 2024 02:04
calling ollama with litellm and streaming response
from litellm import completion

def stream_response(response):
    for chunk in response:
        chunk_content = chunk['choices'][0]['delta'].get('content', '')
        if chunk_content:
            yield chunk_content

def main():
    response = completion(
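The preview is truncated mid-call, but the chunk-parsing logic in stream_response can be exercised on its own. A sketch that feeds it mock chunks shaped like litellm's streaming deltas; the mock data is an assumption standing in for the omitted completion() call:

```python
def stream_response(response):
    # Yield only non-empty delta content from each streamed chunk.
    for chunk in response:
        chunk_content = chunk['choices'][0]['delta'].get('content', '')
        if chunk_content:
            yield chunk_content

# Mock chunks mimicking litellm's streaming format.
mock_chunks = [
    {'choices': [{'delta': {'content': 'Hel'}}]},
    {'choices': [{'delta': {'content': 'lo'}}]},
    {'choices': [{'delta': {}}]},  # a final chunk often carries no content
]
print(''.join(stream_response(mock_chunks)))  # Hello
```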
Begin by enclosing all thoughts within <thinking> tags, exploring multiple angles and approaches.
Break down the solution into clear steps within <step> tags. Start with a 20-step budget, requesting more for complex problems if needed.
Use <count> tags after each step to show the remaining budget. Stop when reaching 0.
Continuously adjust your reasoning based on intermediate results and reflections, adapting your strategy as you progress.
Regularly evaluate progress using <reflection> tags. Be critical and honest about your reasoning process.
Assign a quality score between 0.0 and 1.0 using <reward> tags after each reflection. Use this to guide your approach:
0.8+: Continue current approach
0.5-0.7: Consider minor adjustments
Below 0.5: Seriously consider backtracking and trying a different approach
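The reward thresholds above are straightforward to encode. A sketch of the scoring branch, where the returned action strings are illustrative placeholders rather than anything the prompt prescribes:

```python
def next_action(reward: float) -> str:
    # Map a <reward> score (0.0-1.0) to the guidance in the prompt above.
    if reward >= 0.8:
        return "continue"    # keep the current approach
    if reward >= 0.5:
        return "adjust"      # consider minor adjustments
    return "backtrack"       # seriously consider a different approach

print(next_action(0.9))  # continue
print(next_action(0.6))  # adjust
print(next_action(0.3))  # backtrack
```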
@hololeo
hololeo / rename-pictures.sh
Created December 15, 2023 00:15 — forked from jart/rename-pictures.sh
Shell script for renaming all images in a folder
#!/bin/sh
# rename-pictures.sh
# Author: Justine Tunney <[email protected]>
# License: Apache 2.0
#
# This shell script can be used to ensure all the images in a folder
# have good descriptive filenames that are written in English. It's
# based on the Mistral 7b and LLaVA v1.5 models.
#
# For example, the following command: