fuck ai-shell in the bin

ai.fish

Because @builder-io/ai-shell is terrible. Not their fault, really; OpenAI's models just aren't as good.

Dependencies

| Tool | Notes | URL |
| --- | --- | --- |
| Claude Code | You need it set up and functional. | docs.anthropic.com |
| A JavaScript package manager | We try all of them in order, see NOTES. | oven-sh/bun |
| gum (optional, recommended) | gum is a terminal toolbox for lovely UI glue. | charmbracelet/gum |

Invocation

ai: Request a prompt with an inline editor, and generate the command. This requires gum.

ai [PROMPT]: Provide the prompt on the command line. You don't need to quote it, but you can if you like.
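
For example (the prompts here are purely illustrative):

```fish
ai                                              # opens the gum editor to collect the prompt
ai list the ten largest files in this directory # unquoted prompt straight from the command line
```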

Parameters

-x / --execute: DANGEROUS: Run the generated command immediately. Tries to catch dangerous commands.

-f / --force: VERY DANGEROUS: Disable enforcement of the guardrails, executing literally anything the model outputs.
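
Putting the flags together (again, the prompts are made up for illustration):

```fish
ai -x convert every png in this directory to webp   # auto-runs unless a guardrail pattern matches
ai -x -f tidy up my downloads folder                 # yolo mode: runs whatever comes back, no guardrails
```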

Notes

The script will attempt to call Claude Code using these tools in this order:

bun, deno, pnpm, yarn, npx, then claude directly.

Updates

2025-07-28: I've improved the system prompt and added execution. Now, when you run it, you'll be asked whether you want to run the command. If you specify -x, it'll auto-run, for those of you who prefer yolo mode.

2025-07-28: Added a -f / --force flag for bypassing safety checks, and implemented configurable guardrails with a pattern array to catch dangerous commands like rm -rf / and sudo rm. Now ai -x will refuse to run dangerous commands unless you also specify -f / --force for true yolo mode.
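
The guardrail is nothing fancy: the generated command is checked against a list of fish glob patterns with `string match`. A minimal sketch of the idea, with the pattern list trimmed down and a made-up candidate command:

```fish
# Two of the patterns from the function below, plus an invented command to test against
set -l dangerous_patterns "*rm *" "*curl*|*sh*"
set -l candidate "curl https://example.com/install.sh | sh"

for pattern in $dangerous_patterns
    if string match -q -- "$pattern" "$candidate"
        echo "⚠️ would refuse to auto-execute: matches '$pattern'"
        break
    end
end
```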

function ai --description "AI assistant for generating shell commands"
    argparse 'x/execute' 'f/force' -- $argv
    or return 1

    # Configurable guardrails: patterns to match against potentially dangerous commands
    set -l dangerous_patterns \
        "*rm *" \
        "*chmod -R 777 /*" \
        "*> /dev/sd*" \
        "*dd if=*of=/dev/*" \
        "*mkfs*" \
        "*fdisk*" \
        "*:(){ :|:& };:*" \
        "*curl*|*sh*" \
        "*wget*|*sh*"

    set -l system_prompt "You must output ONLY a single shell command that accomplishes the requested task. Check the --help for the command, and the man page with 'man foo | cat'. NOTABLE PITFALL: BSD vs. GNU versions of tools, which have the same name but different parameters. Adapt your command if necessary. Do not perform the task yourself. Do not output any explanation, markdown formatting, or multiple lines. Output exactly one executable shell command and nothing else."

    set -l ai_output
    set -l prompt

    if test (count $argv) -eq 0
        # No arguments provided, check if gum is available
        if not command -q gum
            echo "❌ Error: gum is not available for interactive prompts."
            echo
echo " Install gum from https://github.com/charmbracelet/gum or"
echo " invoke this function by specifying the prompt directly:"
echo
echo " ai [PROMPT]"
echo
return 1
end
set prompt (gum write --header "Enter your prompt" --placeholder "Type your prompt here..." --width 80 --height 10)
# Check if user cancelled (empty prompt)
if test -z "$prompt"
echo "❌ Cancelled by user"
return 1
end
else
# Arguments provided, use them as the prompt
set prompt "$argv"
end
# Escape quotes and special characters in the prompt to prevent command injection
set prompt (string escape -- $prompt)
echo "🤖 Generating command..."
# Try different package managers to run claude code
if command -q bun
set ai_output (bun x @anthropic-ai/claude-code --append-system-prompt "$system_prompt" -p "$prompt" 2>/dev/null)
else if command -q deno
set ai_output (deno run -A npm:@anthropic-ai/claude-code --append-system-prompt "$system_prompt" -p "$prompt" 2>/dev/null)
else if command -q pnpm
set ai_output (pnpm dlx @anthropic-ai/claude-code --append-system-prompt "$system_prompt" -p "$prompt" 2>/dev/null)
else if command -q yarn
set ai_output (yarn dlx @anthropic-ai/claude-code --append-system-prompt "$system_prompt" -p "$prompt" 2>/dev/null)
else if command -q npx
set ai_output (npx -y @anthropic-ai/claude-code --append-system-prompt "$system_prompt" -p "$prompt" 2>/dev/null)
else if command -q claude
set ai_output (claude --append-system-prompt "$system_prompt" -p "$prompt" 2>/dev/null)
else
echo "❌ Error: No suitable package manager or claude command found."
echo " Please install bun, deno, pnpm, yarn, npm, or claude code directly"
return 1
end
# Check if AI command failed or returned empty output
if test $status -ne 0
echo "❌ Error: Failed to communicate with AI service"
return 1
end
if test -z "$ai_output"
echo "❌ Error: AI service returned no output"
return 1
end
# Remove code blocks, trim whitespace, and get the actual command
# Handle various markdown formats and clean up output
set -l command (echo "$ai_output" | sed -E 's/```[a-zA-Z0-9]*//g; s/```//g' | string trim | head -n 1)
# Validate that we got a non-empty command
if test -z "$command"
echo "❌ Error: AI returned empty command after processing"
echo "Raw output: $ai_output"
return 1
end
# Check for potentially dangerous commands using configurable patterns
set -l is_dangerous false
for pattern in $dangerous_patterns
if string match -q "$pattern" "$command"
set is_dangerous true
break
end
end
if set -q _flag_execute
# Execute mode (-x flag specified)
if test "$is_dangerous" = true
if set -q _flag_force
# Both -x and -f specified: skip guardrails and execute
echo "⚠️ FORCE MODE: Executing potentially dangerous command without guardrails"
echo "⚡ Executing: $command"
eval $command
else
# Only -x specified: refuse to execute dangerous command
echo "❌ SAFETY: Potentially destructive command detected, refusing to auto-execute"
echo "🤖 Generated command:"
echo "$command"
echo ""
echo "💡 Use --force (-f) with --execute (-x) to bypass safety checks, or run without --execute for confirmation prompt"
return 1
end
else
# Not dangerous: execute immediately
echo "⚡ Executing: $command"
eval $command
end
else
# Interactive mode (no -x flag): show command and ask for confirmation
echo "🤖 Generated command:"
echo "$command"
# Show warning if dangerous (ignoring -f flag in interactive mode)
if test "$is_dangerous" = true
echo ""
echo "⚠️ WARNING: This command appears potentially destructive!"
end
echo ""
read -l -P "Execute this command? [y/N] " confirm
if test "$confirm" = y -o "$confirm" = Y
echo "⚡ Executing..."
eval $command
else
echo "❌ Execution cancelled"
end
end
end