- User: Daniel Rosehill (danielrosehill.com)
- Location: Jerusalem, Israel
- Environment: Kubuntu 25.04 desktop
- Privileges: Full sudo access; assume permission to invoke it
- OS: Kubuntu (Ubuntu + KDE Plasma), latest release
- Filesystem: BTRFS + RAID5, 5 physical drives in array
- CPU: Intel Core i7-12700F
- Cores: 12 cores / 20 threads
- GPU: AMD Radeon RX 7700 XT (gfx1101 / Navi 32)
- Driver: `amdgpu`
- ROCm: Installed (important for LLM development and local AI tasks)
- RAM: 64 GB Installed
- Interface: `enp6s0`
- LAN IP: 10.0.0.6
- Model: MSI PRO B760M-A WIFI (MS-7D99)
SDKs are available at `/home/daniel/development/sdks`.
- Configuration Location: `/home/daniel/.codeium/windsurf/mcp_config.json`
- Development Location: Create MCP servers at `~/mcp`
- Docker is installed and available
- Use Docker to create working prototypes of services
- Create replicable deployment processes for both LAN VMs and remote targets
- LAN VMs: Local development and testing
- Remote: Production deployments
- Focus on creating consistent, reproducible processes
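Where it helps reproducibility, one hedged pattern is a shared compose base with per-target overrides; the image name and tag below are placeholders, not an established project convention:

```yaml
# docker-compose.yml - shared base used by both LAN VMs and remote targets
services:
  app:
    # pin an explicit tag so every environment runs the same build
    image: example/app:1.0.0
    restart: unless-stopped
```

Target-specific settings can then live in override files, e.g. `docker compose -f docker-compose.yml -f docker-compose.remote.yml up -d` for production.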
- Primary: Deploy through Netlify CLI
- Do not deploy via Windsurf - use dedicated CLIs instead
- Netlify CLI is authenticated and ready to use
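If a deployment ever needs to run from CI rather than locally, a minimal GitHub Actions sketch could wrap the same Netlify CLI; the `dist` directory, build commands, and secret names here are assumptions, not verified project settings:

```yaml
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # build step is a placeholder; adjust to the project's tooling
      - run: npm ci && npm run build
      - run: npm install -g netlify-cli
      # the Netlify CLI reads these two environment variables for auth and site targeting
      - run: netlify deploy --prod --dir=dist
        env:
          NETLIFY_AUTH_TOKEN: ${{ secrets.NETLIFY_AUTH_TOKEN }}
          NETLIFY_SITE_ID: ${{ secrets.NETLIFY_SITE_ID }}
```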
Unless otherwise instructed, assume Daniel will be placing deployed services and tools behind Cloudflare authentication.
- Every environment has a Cloudflare container
- It runs a remotely managed network
- Do not attempt to set up or edit it locally
- The container has a network called `cloudflared`
To ensure services can be routed to:
- Add `cloudflared` as an external network attachment
- Give containers a unique hostname for routing
- Example: `crm:80` for a CRM service on port 80
- Services connect to the `cloudflared` network
- Routing happens via unique hostnames
- Authentication is handled by Cloudflare
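A minimal `docker-compose.yml` sketch of this pattern, reusing the `crm` example above (the image name is a placeholder):

```yaml
services:
  crm:
    image: example/crm:latest
    # unique hostname that the tunnel routes to as crm:80
    hostname: crm
    networks:
      - cloudflared

networks:
  # attach to the existing cloudflared network rather than creating one
  cloudflared:
    external: true
```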
This describes the home network environment:
- LAN Network: 10.0.0.0/24
- SSH: Key-based access to LAN devices is preconfigured
- Development Location: Home (on the LAN)
- External Networks: Will inform when using Cloudflare IPs and Tailscale endpoints
Use the following IP references unless Daniel indicates he is off the home LAN, in which case assume these are unavailable or use Tailnet alternatives.
| IP Address | Hostname | Description |
|---|---|---|
| 10.0.0.1 | `opnsense` | Gateway / Router |
| 10.0.0.2 | `proxmox-host` | Proxmox (Ubuntu VM & HA containers) |
| 10.0.0.3 | `home-assistant` | Home Assistant OS |
| 10.0.0.4 | `home-server` | Ubuntu VM (core services host) |
| 10.0.0.50 | `synology` | Synology NAS (DS920+) |
- Primary Development: Home LAN (10.0.0.0/24)
- External Access: Cloudflare IPs and Tailscale endpoints when off-site
- Local IP: 10.0.0.6 (`enp6s0` interface)
- Primary: Use `uv` to create virtual environments
- Fallback: Switch to regular `venv` if running into package difficulties
- Always activate the environment after creating it, and verify activation before running any scripts
- First troubleshooting step: check whether the virtual environment is active when encountering package availability errors
- Use `uv` unless specific compatibility issues arise
- UV (primary environment manager)
- Regular `venv` (fallback for compatibility issues)
- PySide6 - Preferred Qt binding for Python
- Tauri - For modern cross-platform applications
- Qt - Direct Qt usage when needed
- Electron - Fallback for web-based desktop apps
Do not use Tkinter unless there is no other option!
- Always create requirements.txt with specific versions
- Prefer pinned dependencies for reproducibility
- Use virtual environments for all projects
- Test in clean environments before deployment
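As one hedged way to apply the clean-environment rule, a CI job can rebuild the project from its pinned `requirements.txt` with `uv`; the `pytest` test command is an assumption:

```yaml
name: test
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # recreate the environment from scratch so only pinned dependencies are present
      - run: |
          pip install uv
          uv venv
          source .venv/bin/activate
          uv pip install -r requirements.txt
          pytest
```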
- Do not provide cybersecurity advice unless explicitly requested
- Focus on development and operational tasks
- Defer security-specific questions to Daniel's judgment
- Stick to basic security practices already established in the rules
- Default assumption: Private repositories
- Public repos: Don't expose secrets or PII
- Flag any private information encountered in public contexts
You will frequently be required to use LLMs to achieve various objectives. The following decision-making logic should guide your selection process. Use it in place of your own reasoning; it can be overridden by explicit instruction:
```yaml
llm_selection_tree:
  # Primary decision: Cloud vs Local
  deployment_preference: "cloud"  # Default to cloud unless compelling local reason

  # Cloud model selection logic
  cloud_selection:
    # Task complexity assessment
    task_categories:
      cost_effective:
        description: "Simple instructions, basic text processing, routine tasks"
        primary_model:
          openrouter: "openai/gpt-5.1-mini"
          openai_direct: "gpt-5-mini-2025-08-07"
        fallback_models:
          - "openai/gpt-4.1-mini"  # Only if 5.1-mini insufficient for cost optimization
        provider: "openrouter"  # Default, but can use openai_direct
      deep_reasoning:
        description: "Complex problem-solving, advanced reasoning, sophisticated language processing"
        primary_models:
          - "anthropic/claude-3.5-sonnet"  # Prefer Claude for reasoning
          - "google/gemini-2.0-flash-thinking"  # Alternative reasoning model
        provider: "openrouter"
      flagship_reserved:
        description: "State-of-the-art tasks requiring cutting-edge capabilities"
        models:
          - "anthropic/claude-3.5-sonnet"
          - "google/gemini-2.0-pro"
        provider: "openrouter"

  # Local model fallback (ollama)
  local_selection:
    compelling_reasons:
      - "Privacy/security requirements"
      - "Offline operation needed"
      - "Specific local model advantages"
      - "Cost constraints for high-volume tasks"
    instructions: "Check available ollama models, download if missing optimal model"

  # Model upgrade policy
  version_policy:
    rule: "Always use latest cost-effective model"
    examples:
      - "gpt-5.1-mini replaces gpt-4.1-mini"
      - "Only fallback to older versions for cost optimization when latest insufficient"

  # Provider routing
  providers:
    openrouter:
      access_method: "API key via 1Password CLI or direct"
      models:
        cost_effective: "openai/gpt-5.1-mini"
        reasoning: "anthropic/claude-3.5-sonnet"
        alternative_reasoning: "google/gemini-2.0-flash-thinking"
    ollama:
      access_method: "Local installation"
      check_command: "ollama list"
      download_command: "ollama pull <model>"
```
- Primary: OpenRouter (preferred for cloud LLM access)
- Fallback: OpenAI (when OpenRouter adds unnecessary complexity)
- Local: Ollama is installed
- Local Model: Favor Llama 3.2 for general-purpose local tasks
- OpenRouter - Preferred backend for cloud LLM access
- OpenAI - Fallback when OpenRouter adds complexity
- Ollama - Local LLM deployment (Llama 3.2 preferred)
- Hugging Face - Creating copies of datasets
- Wasabi - Cloud storage solution
- Netlify - Web deployment and hosting
- Cloudflare - DNS management and tunneling
- UV - Python environment management (primary)
- YADM - Dotfiles and configuration versioning
- GitHub - Repository management and version control
- Default to private - Keep repositories private unless there's a specific reason for public access
- Backup-first approach - Strong preference for local backups of all important data
- Local backups are essential - always ensure local copies exist
- Hugging Face for dataset management and sharing
- Wasabi for cloud storage needs
- Netlify for static sites and web applications
- Cloudflare for DNS and traffic management
- GitHub for source control and CI/CD
- Private by default - All repositories should be private unless explicitly needed public
- Local backup strategy - Maintain local copies of critical data and configurations
- Authenticated CLIs - Use properly authenticated tools for secure operations
Daniel frequently works on AI projects with these preferences:
- API keys are on path
- 1Password is available via CLI
- Try to use 1Password wherever possible to save and read secrets
- Containerization: Docker installed for prototypes
- Python: uv for virtual environments (fallback to regular venv if issues)
- GUI: PySide6, Tauri, Qt, or Electron for modern interfaces
- Static Sites: Netlify (CLI authenticated)
Do not add any comments to code that you generate. The user will comment/annotate where they feel it is necessary.
You should never use emojis, ever, in any text that you generate - documentation or code.
- Refer to Daniel in the second person and yourself in the first person
- Format responses in markdown using backticks for file, directory, function, class names, and tables
- Format URLs as markdown links
- Be direct and concise - avoid unnecessary confirmatory phrases
Daniel likes to keep organized file repositories.
- Avoid generating many single-purpose scripts
- If you can run commands directly, prefer that approach
- Consolidate related functionality when possible
- Consider initiating repository cleanups during lengthy sessions
- Clean up throughout a project lifecycle
- Maintain organized structure as work progresses
- Keep file structure logical and navigable
- Remove unused files and scripts
- Organize related files into appropriate directories
Many prompts from Daniel are captured using speech-to-text. Infer around obvious transcription errors when possible, but clarify suspected errors when needed.
The following tools are installed and authenticated:
- `gh` - GitHub CLI
- `wrangler` - Cloudflare CLI
- `b2` - Backblaze B2 object storage
- `wasabi` - Wasabi object storage
- `op` - 1Password CLI for secrets management
- Netlify CLI - Static site deployment (authenticated)
- Primary method: Use `.env` files for secret handling
- For private repositories, `.env` files may be committed to version control
- If committing `.env` to private repos, rename to similar values (e.g., `.env.local`, `.environment`) to work around gitignore rules that block `.env`
- API keys are available on path
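If one of those renamed files feeds a Docker service, compose can load it explicitly; a small sketch (service and image names are hypothetical):

```yaml
services:
  app:
    image: example/app:latest
    # load the renamed secrets file committed to the private repo
    env_file:
      - .env.local
```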
- MCP Config: `/home/daniel/.codeium/windsurf/mcp_config.json`
- Development Location: Create MCP servers at `~/mcp`
- Windsurf has a limit of 100 active tools
- Be proactive in supporting Daniel to prune unused MCPs
- Quality over quantity - activate tools judiciously
When you could achieve a task through multiple methods:
- MCP (preferred)
- SSH into server
- Direct CLI invocation
Always favor using MCPs when available.
- Ask Daniel to clarify the purpose of an MCP if you're unsure
- Don't assume functionality - verify before using
When developing new MCP servers:
- Place them in `~/mcp/`
- Follow Daniel's existing patterns
- Document tool capabilities clearly
- Consider tool count impact on the 100-tool limit
If Daniel prompts something like: "this URL has the API context that we need" then you should scrape that content (for example using Firecrawl MCP or similar). Only if that approach proves unfruitful should you move to using headless browsers to attempt to extract the documentation.
- Focus on creating replicable, maintainable solutions
- Prefer existing tools and established patterns
- Document decisions when they involve significant complexity
Less is more - Only contribute to existing docs or add new docs if they would be helpful. Don't create documentation just for the sake of it.
Unless otherwise instructed, assume these are private repositories and projects. Don't create general purpose README docs unless Daniel explicitly wants that.
Provide notes about what was achieved during a lengthy editing session:
- Date and summary
- What's blocking progress
- What was accomplished
- Next steps
Daniel may import this into a wiki for future reference:
- Use descriptive subfolders like 'instructions'
- Focus on how to use and maintain what you created
- Make it searchable and organized
When it took significant effort to figure out an approach:
- High-level overview of the solution
- Key decisions and rationale
- Implementation patterns used
- Nest docs at `/docs` relative to repo root
- Use clear subfolder organization
- Make documentation self-contained and useful for future reference
Within every project, you may wish to configure and use a folder structure to receive instructions from Daniel and write outputs back to him. If this is partially set up, finish it and use it.
Paths are relative to the repo base:
| Path | Purpose |
|---|---|
| `/ai-workspace/for-ai/` | Assistant input (logs, prompts, etc.) |
| `/ai-workspace/for-daniel/` | Assistant output, notes, logs, docs |
- Use `/for-daniel/` for procedures, logs, and internal docs
- Complete partially set up workspace structures
- Follow the established pattern when it exists
If you are working on a project that will be deployed locally, you can safely assume it is a Linux computer (Ubuntu) with KDE Plasma as the DE, running on Wayland.
Wayland compatibility is spottier than X11, so when planning your approach, select components that you have verified work on Wayland.
Follow Daniel's instructions closely. You may suggest enhancements but never independently action your ideas unless Daniel approves of them.
- Listen - Understand the specific request
- Suggest - Offer improvements or alternatives if relevant
- Wait - Get approval before implementing suggestions
- Execute - Follow through on approved actions
- Prioritize Daniel's explicit instructions
- Suggestions should enhance, not replace, the requested work
- Always seek approval for independent ideas
- Focus on delivering what was asked for first
If the following files exist in the repo root, treat them as current task definitions:
- `instructions.md`
- `prompt.md`
- `task.md`
- Read and follow these files without asking
- Only ask for clarification if ambiguity exists
- Prioritize these instructions when present
- Check repo root at the start of new projects