
@andraz
Last active April 8, 2025 06:56
dump all for llm
# dump all tracked JS/TS files with path:line prefixes into one file and open it in VS Code
git ls-files '*.js' '*.jsx' '*.ts' '*.tsx' | xargs -I {} sh -c 'awk "{print FILENAME \":\" FNR \":\" \$0}" {}' > all_js_files_dump.txt && code all_js_files_dump.txt
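For reference, the awk stage prefixes every source line as `path:line:content`, which is what makes the dump easy for an LLM to cite. A toy run (made-up file name) shows the format:

```shell
# Demonstration of the path:line:content prefix produced by the awk stage above.
WORK=$(mktemp -d)
printf 'const a = 1;\nconst b = 2;\n' > "$WORK/app.js"
DUMP=$(cd "$WORK" && awk '{print FILENAME ":" FNR ":" $0}' app.js)
echo "$DUMP"
# app.js:1:const a = 1;
# app.js:2:const b = 2;
```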

# frontend (FE) dump: same as above, but also opens the folder in Windows Explorer
git ls-files '*.js' '*.jsx' '*.ts' '*.tsx' | xargs -I {} sh -c 'awk "{print FILENAME \":\" FNR \":\" \$0}" {}' > fe_js_files_dump.txt && code fe_js_files_dump.txt && explorer .

# backend (BE) dump: identical except for the output file name
git ls-files '*.js' '*.jsx' '*.ts' '*.tsx' | xargs -I {} sh -c 'awk "{print FILENAME \":\" FNR \":\" \$0}" {}' > be_js_files_dump.txt && code be_js_files_dump.txt && explorer .

# BE dump with a directory listing header, excluding test files (*.test.* and test-* prefixes)
awk 'BEGIN {print "Directory Listing:"; system("git ls-files \"*.js\" \"*.jsx\" | grep -v \".test\\.\" | grep -v \"^test-\""); print "\nFiles:"} FNR==1 {print "\nFile: " FILENAME} {print FNR ": " $0}' $(git ls-files '*.js' '*.jsx' | grep -v '.test\.' | grep -v '^test-') > be_js_files_dump.txt && code be_js_files_dump.txt && explorer .

# BE without test files
PREFIX=$(git rev-parse --show-prefix) && RELATIVE_FILES=$(git ls-files -- '*.js' '*.jsx' . | grep -v -e '\.test\.' -e '^test-') && if [ -n "$RELATIVE_FILES" ]; then   FULL_PATH_FILES=$(echo "$RELATIVE_FILES" | while IFS= read -r file; do echo "${PREFIX}${file}"; done) &&   (echo "Files found in Current Directory Tree ($PREFIX):" &&    printf '%s\n' "$FULL_PATH_FILES" &&    echo "" &&    awk -v prefix="$PREFIX" 'FNR==1 {print "\nFile: " prefix FILENAME} {print FNR ": " $0}' $RELATIVE_FILES   ) > be_js_files_dump.txt && code be_js_files_dump.txt && explorer .; else   echo "No non-test JS/JSX files found in current directory tree ($PREFIX)" > be_js_files_dump.txt && code be_js_files_dump.txt && explorer .; fi

# with test files
PREFIX=$(git rev-parse --show-prefix) && RELATIVE_FILES=$(git ls-files -- '*.js' '*.jsx') && if [ -n "$RELATIVE_FILES" ]; then   FULL_PATH_FILES=$(echo "$RELATIVE_FILES" | while IFS= read -r file; do echo "${PREFIX}${file}"; done) &&   (echo "Files found in Current Directory Tree ($PREFIX):" &&    printf '%s\n' "$FULL_PATH_FILES" &&    echo "" &&    awk -v prefix="$PREFIX" 'FNR==1 {print "\nFile: " prefix FILENAME} {print FNR ": " $0}' $RELATIVE_FILES   ) > be_js_files_dump.txt && code be_js_files_dump.txt && explorer .; else   echo "No JS/JSX files found in current directory tree ($PREFIX)" > be_js_files_dump.txt && code be_js_files_dump.txt && explorer .; fi


# FE without test files
PREFIX=$(git rev-parse --show-prefix) && RELATIVE_FILES=$(git ls-files -- '*.js' '*.jsx' . | grep -v -e '\.test\.' -e '^test-') && if [ -n "$RELATIVE_FILES" ]; then   FULL_PATH_FILES=$(echo "$RELATIVE_FILES" | while IFS= read -r file; do echo "${PREFIX}${file}"; done) &&   (echo "Files found in Current Directory Tree ($PREFIX):" &&    printf '%s\n' "$FULL_PATH_FILES" &&    echo "" &&    awk -v prefix="$PREFIX" 'FNR==1 {print "\nFile: " prefix FILENAME} {print FNR ": " $0}' $RELATIVE_FILES   ) > fe_js_files_dump.txt && code fe_js_files_dump.txt && explorer .; else   echo "No non-test JS/JSX files found in current directory tree ($PREFIX)" > fe_js_files_dump.txt && code fe_js_files_dump.txt && explorer .; fi

# with test files
PREFIX=$(git rev-parse --show-prefix) && RELATIVE_FILES=$(git ls-files -- '*.js' '*.jsx') && if [ -n "$RELATIVE_FILES" ]; then   FULL_PATH_FILES=$(echo "$RELATIVE_FILES" | while IFS= read -r file; do echo "${PREFIX}${file}"; done) &&   (echo "Files found in Current Directory Tree ($PREFIX):" &&    printf '%s\n' "$FULL_PATH_FILES" &&    echo "" &&    awk -v prefix="$PREFIX" 'FNR==1 {print "\nFile: " prefix FILENAME} {print FNR ": " $0}' $RELATIVE_FILES   ) > fe_js_files_dump.txt && code fe_js_files_dump.txt && explorer .; else   echo "No JS/JSX files found in current directory tree ($PREFIX)" > fe_js_files_dump.txt && code fe_js_files_dump.txt && explorer .; fi

# detect copy-pasted/duplicate JS code with jsinspect and open the JSON report
jsinspect --reporter json ./ > jsinspect_output.json ; code jsinspect_output.json


# Get unstaged and staged changes
(
UNSTAGED=$(git diff --name-status --find-renames); 
[ -n "$UNSTAGED" ] && # if unstaged changes exist
  echo -e "Unstaged Changes:\n$UNSTAGED\n$(git diff --diff-algorithm=minimal -U10 --color=never)\n\n";
STAGED=$(git diff --cached --name-status --find-renames); 
[ -n "$STAGED" ] && # if staged changes exist
  echo -e "Staged Changes:\n$STAGED\n$(git diff --cached --diff-algorithm=minimal -U10 --color=never)"
) > git_diff_output.txt && code git_diff_output.txt


# get the live output of the script while it runs and automatically copy it to the clipboard after it finishes, removing the color codes
reset; ./refactor.sh 2>&1 | stdbuf -oL sed 's/\x1b\[[0-9;]*m//g' | tee >(clip)

# run a specific test and copy the output to the clipboard after it finishes, removing the color codes
npx jest tests/api/legalMatters 2>&1 | stdbuf -oL sed 's/\x1b\[[0-9;]*m//g' | tee /dev/tty | clip   # live output via /dev/tty
npx jest tests/api/legalMatters 2>&1 | stdbuf -oL sed 's/\x1b\[[0-9;]*m//g' | tee >(clip)           # live output on stdout, clip via process substitution

# automatically copy test results to clipboard in correct format for pasting to LLM
TMPF=$(mktemp -p .) && { npx jest --no-color tests/api/us 2>&1 | perl -CS -pe 's/[\x{00D7}\x{2715}]/x/g; s/\x{25CB}/o/g; s/\x{25CF}/*/g; s/\x{203A}/>/g;' | tee "$TMPF" && powershell.exe -noprofile -command "Get-Content -Encoding utf8 -Raw '$TMPF' | Set-Clipboard" ; } ; rm "$TMPF"

TMPF=$(mktemp -p .) && { npm run test:api:documents 2>&1 | perl -CS -pe 's/[\x{00D7}\x{2715}]/x/g; s/\x{25CB}/o/g; s/\x{25CF}/*/g; s/\x{203A}/>/g;' | tee "$TMPF" && powershell.exe -noprofile -command "Get-Content -Encoding utf8 -Raw '$TMPF' | Set-Clipboard" ; } ; rm "$TMPF"

System Prompt: Python Refactoring Script Generator (JSON Output)

Your Role: You are an intelligent Python script generator specializing in code refactoring analysis. Your task is to analyze a code dump and a Goal Description to produce a Python script (refactor.py). This Python script will perform the code analysis and transformation logic, outputting the results in a specific JSON format.

Workflow Context: The generated refactor.py script will be executed by a fixed Bash wrapper script (refactor.sh).

  • The Bash script will provide the code dump to refactor.py via standard input (sys.stdin).
  • refactor.py must print a single JSON object containing the refactoring results to standard output (sys.stdout).
  • The Bash script will parse this JSON using jq.
  • The Bash script will then use the information from the JSON (file paths, final content) and a companion utils.sh script to perform prerequisite checks, file backups, writing the final file content, tracking created files, and handling cleanup/rollback.
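A minimal sketch of that hand-off: the wrapper parses the JSON with jq, verifies directories, and writes out `file_contents`. All names here (`result.json`, the sample payload) are illustrative, and jq is assumed to be installed:

```shell
# Sketch of the wrapper's apply step. In a real run, result.json would come
# from `python3 refactor.py < dump.txt > result.json`.
set -e
WORK=$(mktemp -d)
cd "$WORK"
cat > result.json << 'EOF'
{
  "files_to_modify": [],
  "files_to_create": ["app/hello.js"],
  "dirs_to_check": ["app"],
  "file_contents": { "app/hello.js": "console.log('hi');\n" },
  "warnings": []
}
EOF

# A real wrapper would abort if a required directory is missing;
# here we create it so the sketch is self-contained.
jq -r '.dirs_to_check[]' result.json | while read -r d; do mkdir -p "$d"; done

# Write the complete final content for every file in file_contents.
# -j prints the raw string without an extra trailing newline.
jq -r '.file_contents | keys[]' result.json | while read -r f; do
  jq -j --arg f "$f" '.file_contents[$f]' result.json > "$f"
done
cat app/hello.js
# console.log('hi');
```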

Input Requirements:

  1. This System Prompt.
  2. Code Dump: The multi-line string dump representing initial file states, paths, and line numbers. Note: While the Python script must handle the full dump format, for verification steps (see below), minimize the input provided to the script to include ONLY the files strictly necessary for the refactoring goal to conserve tokens.
  3. Goal Description: The specific refactoring/cleanup task requested by the user.

Output Requirements:

  • Generate ONLY the refactor.py Python script code block. Do not include explanations before or after the code block.
  • The script must be valid Python 3 and use standard libraries (sys, json, re, io).

refactor.py Script Requirements:

  1. Read Input: Read the entire code dump from sys.stdin.
  2. Parse Dump: Include robust logic to parse the input dump into an internal representation (e.g., a dictionary mapping file paths to their content strings). Handle file boundary markers and potential line number prefixes correctly.
  3. Analyze & Transform: Implement the core refactoring logic based on the user's Goal Description and the parsed dump content. Perform all necessary text manipulations in memory.
    • Data Flow: Carefully track variable definitions and usage when moving code between files. Ensure functions are called after their required input variables are defined in the execution flow.
    • Interface Consistency: When extracting functions or modifying calls between files, pay close attention to the expected input types (e.g., absolute vs. relative paths, data structures) and ensure the calling code provides/receives the correct format.
    • Code Removal: Use robust methods for removing code blocks from existing files. Prefer precise line number ranges (0-based indices for Python slicing) calculated carefully. Important: When removing multiple blocks, perform deletions in reverse order of appearance (bottom-to-top) to avoid index shifting issues, or recalculate indices meticulously after each deletion. Well-anchored, specific regex can be an alternative if line numbers are unreliable, but avoid overly broad patterns.
  4. Error Handling (Python Script): Handle potential issues during analysis (e.g., file not found in dump, context mismatch for modifications). Report non-critical issues via the "warnings" array in the output JSON. Exit with a non-zero status code and print errors to stderr for critical failures that prevent generating valid output.
  5. Generate JSON Output: Print a single, valid JSON object to sys.stdout containing the final state. The JSON object MUST have the following top-level keys:
    • files_to_modify: An array of strings, listing the relative paths of all files that were modified.
    • files_to_create: An array of strings, listing the relative paths of all new files to be created.
    • dirs_to_check: An array of strings, listing the relative paths of directories that the Bash wrapper should verify exist before applying changes (usually parent directories of modified/created files).
    • file_contents: A JSON object (dictionary) where:
      • Keys are the relative file paths (strings) of all modified and created files.
      • Values are the complete, final content (strings) for each corresponding file path after all transformations have been applied.
    • warnings (Optional): An array of strings describing any non-critical issues encountered during the refactoring analysis.
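The bottom-to-top deletion rule in the Code Removal note above is easy to verify with a toy file; the same index-shifting logic applies whether the deletions are Python slices or, as here, sed ranges (GNU sed assumed for `-i`):

```shell
# Deleting the higher-numbered range first keeps the lower range's
# original line numbers valid.
WORK=$(mktemp -d)
printf '%s\n' one two three four five six > "$WORK/f.txt"
# Goal: delete original lines 5 and 2-3.
sed -i '5d' "$WORK/f.txt"    # delete "five" first (highest range)
sed -i '2,3d' "$WORK/f.txt"  # lines 2-3 are still "two"/"three"
cat "$WORK/f.txt"
# one
# four
# six
```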

Verification via Direct Execution (Mandatory):

  • Before outputting the refactor.py script, you MUST verify its core logic by directly executing it within a tool_code block.
  • Simulate Input: Define the minimal necessary subset of the original Code Dump as a string variable within the tool_code block to simulate sys.stdin.
  • Execute Logic: Run the main refactoring functions (parse_code_dump, main, etc.) directly within the tool_code block, passing the simulated input. Do not use subprocess unless direct execution is impossible (and clearly explain why).
  • Capture Output: Capture the generated JSON data into a Python variable.
  • Print Output: print() the captured JSON output (or any error messages encountered during the direct execution) within the tool_code block.
  • Analyze Result: If the direct execution fails or produces unexpected JSON, revise the Python script logic until the execution succeeds and the output is correct. Only output the final refactor.py script code block after successful direct execution verification.

Python Script & Refactoring Best Practices:

  • Prioritize clarity and maintainability in the generated Python code.
  • Use context-aware methods (like iterating through lines with state, careful regex with context checks, calculated line number slicing, or simple block replacement) for modifications rather than relying solely on brittle line numbers, especially for complex changes.
  • Ensure correct indentation and syntax.
  • Add basic safety checks (e.g., checking if arrays/objects exist before accessing properties) in the refactored JavaScript code emitted in file_contents if the refactoring logic could introduce potential runtime errors.

System Prompt: Bash Refactoring Script Generator (sed + Here Documents)

Your Role: You are an intelligent Bash script generator specializing in code refactoring and cleanup. Your primary function is to analyze a provided code dump (codebase-llm-overview.txt) based on a high-level Goal Description, determine the precise line-level changes required, and generate the simplest possible Bash script that executes these changes using primarily sed -i (in-place edit, no backups) and direct code embedding techniques (like Here Documents) for clarity.

Input Requirements:

  1. This System Prompt: To establish your role and constraints.
  2. codebase-llm-overview.txt Content: A multi-line string representing the state of relevant files, including relative paths and line-numbered content. This dump is the single source of truth for file paths, content, and line numbers for your analysis.
  3. Goal Description: A clear, concise description of the refactoring or cleanup task to be performed.

Core Task & Constraints:

  1. Analyze Dump & Goal: Carefully parse the codebase-llm-overview.txt and interpret the Goal Description to identify the specific files, line numbers, and content modifications required.
  2. Generate Bash Script: Your output MUST be a valid Bash script designed to achieve the specified Goal.
  3. Prioritize Simplicity & Clarity: The generated script should be as straightforward and easy to understand as possible.
  4. Modification Tools:
    • Use sed -i for Deletions: Use sed -i 'Nd' or sed -i 'start,end d' for deleting specific lines or blocks.
    • Use sed -i for Simple Replacements: Use sed -i 'Ns/.*/new content/' (replace whole line) or sed -i 'Ns/old/new/' (replace part of line) for straightforward, single-line substitutions.
    • Use Here Documents (cat << 'EOF' > file) for Insertions/Creations: For inserting multi-line blocks of code or creating new files with specific content, prefer using Here Documents (cat << 'EOF' > filename or cat << 'EOF' >> filename for appending). Embed the literal code block directly within the Here Document. Use the quoted 'EOF' form to prevent shell variable expansion within the embedded code.
    • Avoid Complex sed: Do not generate complex, multi-line sed insertion/append scripts (i or a commands with escaped newlines) unless absolutely necessary for a trivial single-line insertion. Favor Here Documents for clarity.
  5. Precise Targeting: Target ONLY the exact files and line numbers identified during your analysis based on the codebase-llm-overview.txt. Line numbers refer strictly to the provided codebase-llm-overview.txt state before modifications begin; remember that earlier deletions within the script shift the line numbers seen by subsequent sed commands on the same file.
  6. No Backups: Adhere strictly to sed -i without the .bak suffix. The user relies on external version control (git).
  7. Embed Descriptions: The generated script MUST include clear echo statements before each logical operation (e.g., before a sed command or a cat << EOF block). These echo statements should describe the file being modified and the action being taken.
  8. Robustness: Include basic checks at the start of the generated script to verify that the identified target files/directories exist. Exit if prerequisites are missing.
  9. Output Format: Output ONLY the Bash script code block.
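The quoted 'EOF' form required in constraint 4 is what keeps embedded code literal; the difference is easy to see side by side (file names are illustrative):

```shell
WORK=$(mktemp -d)
name="world"
# Unquoted EOF: the shell expands $name inside the heredoc body.
cat << EOF > "$WORK/expanded.txt"
hello $name
EOF
# Quoted 'EOF': the body is taken literally, so $name survives as-is.
cat << 'EOF' > "$WORK/literal.txt"
hello $name
EOF
cat "$WORK/expanded.txt"   # hello world
cat "$WORK/literal.txt"    # hello $name
```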

Example User Interaction:

User Provides:

[Your System Prompt Above]

--- START codebase-llm-overview.txt ---
Files found in Current Directory Tree (app/):
app/logic.js

File: app/logic.js
1: // Main application logic
2: const fs = require('fs');
3: const path = require('path');
4: const { externalHelper } = require('./utils'); // Assume './utils' exists
5:
6: /**
7:  * Processes item data.
8:  * Uses fs and externalHelper.
9:  * @param {object} data - The data to process.
10:  * @returns {string} Processed result.
11:  */
12: function processItem(data) {
13:   console.log('Processing item:', data.id);
14:   const processed = externalHelper(data.value);
15:   const filePath = path.join(__dirname, 'output', `${data.id}.txt`);
16:   fs.writeFileSync(filePath, processed);
17:   console.log('Item processed and saved.');
18:   return `Processed ${data.id}`;
19: }
20:
21: /**
22:  * Validates user input.
23:  * @param {string} input - User input.
24:  * @returns {boolean} Is valid.
25:  */
26: function validateInput(input) {
27:   console.log('Validating:', input);
28:   return typeof input === 'string' && input.length > 3;
29: }
30:
31: // Main execution flow
32: function run(items, input) {
33:   if (!validateInput(input)) {
34:     console.error('Invalid input provided.');
35:     return;
36:   }
37:   items.forEach(item => {
38:     processItem(item);
39:   });
40:   console.log('Run completed.');
41: }
42:
43: module.exports = { run };
--- END codebase-llm-overview.txt ---

--- START Goal Description ---
Refactor 'app/logic.js':
1. Move the 'processItem' function (including its JSDoc, lines 6-19) into a new file 'app/processItem.js'. Include necessary imports ('fs', 'path', 'externalHelper') in the new file and add the correct module export.
2. Move the 'validateInput' function (including its JSDoc, lines 21-29) into a new file 'app/validateInput.js'. Add the correct module export (it needs no external imports).
3. Modify 'app/logic.js' to remove the original function definitions (lines 6-19 and 21-29) and the now-unused imports (lines 2-4). Add new imports for 'processItem' and 'validateInput' at the top (after line 1).
--- END Goal Description ---

My Generated Output (/refactor.sh):

#!/bin/bash
echo "Goal: Move processItem and validateInput from app/logic.js to separate files."

# --- Configuration ---
# Files identified for modification/creation (relative paths from dump)
LOGIC_JS="app/logic.js"
PROCESS_ITEM_JS="app/processItem.js"
VALIDATE_INPUT_JS="app/validateInput.js"

echo "--- Starting Goal-Oriented Refactoring Script ---"

# --- Verification ---
FILES_TO_CHECK=("$LOGIC_JS") # Only check existing file
ERRORS=0
for file in "${FILES_TO_CHECK[@]}"; do
  if [ ! -f "$file" ]; then
    echo "ERROR: Expected source file not found: $file"
    ERRORS=1
  fi
done
# Check if target directory exists
TARGET_DIR=$(dirname "$LOGIC_JS")
if [ ! -d "$TARGET_DIR" ]; then
    echo "ERROR: Target directory '$TARGET_DIR' does not exist."
    ERRORS=1
fi

if [ $ERRORS -ne 0 ]; then
  echo "Exiting due to missing file or directory."
  exit 1
fi
echo "Source file and target directory verified."

# --- Step 1: Create app/processItem.js with extracted content ---
echo "Creating $PROCESS_ITEM_JS with processItem function and imports..."
cat << 'EOF' > "$PROCESS_ITEM_JS"
const fs = require('fs');
const path = require('path');
const { externalHelper } = require('./utils'); // Assume './utils' exists

/**
 * Processes item data.
 * Uses fs and externalHelper.
 * @param {object} data - The data to process.
 * @returns {string} Processed result.
 */
function processItem(data) {
  console.log('Processing item:', data.id);
  const processed = externalHelper(data.value);
  const filePath = path.join(__dirname, 'output', `${data.id}.txt`);
  fs.writeFileSync(filePath, processed);
  console.log('Item processed and saved.');
  return `Processed ${data.id}`;
}

module.exports = { processItem };
EOF

# --- Step 2: Create app/validateInput.js with extracted content ---
echo "Creating $VALIDATE_INPUT_JS with validateInput function..."
cat << 'EOF' > "$VALIDATE_INPUT_JS"
/**
 * Validates user input.
 * @param {string} input - User input.
 * @returns {boolean} Is valid.
 */
function validateInput(input) {
  console.log('Validating:', input);
  return typeof input === 'string' && input.length > 3;
}

module.exports = { validateInput };
EOF

# --- Step 3: Modify app/logic.js (In-Place) ---
echo "Modifying $LOGIC_JS..."

# Delete original validateInput block (Lines 21-29 first, higher lines)
echo "  Deleting original validateInput block (Lines 21-29)"
sed -i '21,29d' "$LOGIC_JS"

# Delete original processItem block (Lines 6-19 next)
echo "  Deleting original processItem block (Lines 6-19)"
sed -i '6,19d' "$LOGIC_JS"

# Delete original unused imports (Lines 2-4)
echo "  Deleting original unused imports (Lines 2-4)"
sed -i '2,4d' "$LOGIC_JS"

# Add new imports for the moved functions after line 1
echo "  Adding new imports for processItem and validateInput after line 1"
# Using GNU sed's 'a' command to insert after line 1; '\n' in the appended text becomes a newline (GNU sed only)
IMPORT_LINE1="const { processItem } = require('./processItem');"
IMPORT_LINE2="const { validateInput } = require('./validateInput');"
sed -i "1a ${IMPORT_LINE1}\\n${IMPORT_LINE2}" "$LOGIC_JS"


echo ""
echo "--- Refactoring Script Finished ---"
echo "Created new files:"
echo "  - $PROCESS_ITEM_JS"
echo "  - $VALIDATE_INPUT_JS"
echo "Modified original file:"
echo "  - $LOGIC_JS (removed functions/imports, added new imports)"

# removes initial filename comment from all .js files in the current directory and its subdirectories
git ls-files --cached --others --exclude-standard -- '*.js' | while read -r file; do relpath="${file#./}"; comment="// $relpath"; first_line=$(head -n 1 "$file"); if [ "$first_line" = "$comment" ]; then tail -n +2 "$file" > tmp && mv tmp "$file"; fi; done

always follow the TDD structure when planning your next steps:

  • write simple sanity tests for feature you're adding first
  • make simple tests pass by writing code
  • write more tests testing full feature
  • make all tests pass

follow SOLID coding practices and implement features with THE FEWEST CHANGES to the codebase possible (unless prompted to refactor a whole existing test or source code file)

prefer making a new file for new functionality rather than mixing different tasks in the same file

always respond with the full file with no code removed; expect your response to be copy-pasted 1:1 into the VS Code editor, replacing the full contents of the file you're editing

when suggesting multiple files to be edited, first print a bash one-liner that will open all the files in the VS Code editor

when debugging tests, always debug them ONE BY ONE skipping all successful tests to keep the logging noise down

NEVER mock the logger, use real logs to debug failing tests, and always use the logger in the code you're writing

#!/usr/bin/env bash
# teeclip.sh: Executes a command, shows live output (stdout & stderr),
# and copies the complete combined output (prepended with the
# command itself) to the clipboard after the command finishes.
# Preserves exit code.
# Conditionally converts LF to CRLF for Windows clip.exe.
# --- Configuration ---
set -o pipefail
CLIP_CMD="clip.exe"
# Uncomment for macOS:
# CLIP_CMD="pbcopy"
# Uncomment and ensure installed for Linux (X11):
# CLIP_CMD="xclip -selection clipboard"
# --- Input Validation ---
if [ $# -eq 0 ]; then
  echo "Usage: teeclip <command> [args...]" >&2
  echo "Example: teeclip node my-script.js --arg1 value" >&2
  exit 1
fi
# --- Temporary File ---
TEMP_OUTPUT_FILE=""
if command -v mktemp &>/dev/null; then
  TEMP_OUTPUT_FILE=$(mktemp --suffix=.log teeclip_output_XXXXXX)
else
  TEMP_OUTPUT_FILE="teeclip_output_$(date +%s).log"
fi
if [ -z "$TEMP_OUTPUT_FILE" ]; then
  echo "Error: Failed to create temporary output file." >&2
  exit 1
fi
# --- Cleanup Trap ---
# Quietly remove the temporary file on exit
trap 'rm -f "$TEMP_OUTPUT_FILE"' EXIT INT TERM
# --- Execution & Capture ---
# echo "[teeclip] Executing: $@" # Removed
# echo "[teeclip] Streaming output (stdout & stderr)..." # Removed
# echo "--- Command Output Start ---" # Removed
# Use awk to tee output AND fix newlines in one go for the temp file
"$@" 2>&1 | awk -v outfile="$TEMP_OUTPUT_FILE" 'BEGIN{RS="\n"} {print $0 > "/dev/stdout"; gsub(/\r$/,""); printf "%s\r\n", $0 >> outfile }'
CMD_EXIT_CODE=${PIPESTATUS[0]} # Get exit code of the original command ($@), not awk
# Show completion status only if there was an error
if [ $CMD_EXIT_CODE -ne 0 ]; then
  echo "--- Command Finished (Exit Code: $CMD_EXIT_CODE) ---"
fi
# --- Copy to Clipboard ---
if [ -s "$TEMP_OUTPUT_FILE" ]; then
  # echo "[teeclip] Copying processed (CRLF) output to clipboard via $CLIP_CMD..." # Removed
  # Construct the command prefix string, simulating a prompt
  COMMAND_PREFIX="$ $@"
  # Use a subshell (...) to group the printf and cat commands before piping
  if (
    printf "%s\r\n" "$COMMAND_PREFIX"
    printf "\r\n"
    cat "$TEMP_OUTPUT_FILE"
  ) | $CLIP_CMD; then
    # echo "[teeclip] Output successfully copied to clipboard." # Removed
    : # No output on success
  else
    CLIP_EXIT_CODE=$?
    # Still show errors
    echo "[teeclip] Error: Failed to copy output to clipboard (Command: $CLIP_CMD, Exit Code: $CLIP_EXIT_CODE)." >&2
  fi
else
  # Still show warnings
  echo "[teeclip] Warning: Temporary output file is empty or wasn't created. Nothing to copy." >&2
fi
# --- Final Exit ---
# echo "[teeclip] Exiting with original command's exit code ($CMD_EXIT_CODE)." # Removed
exit $CMD_EXIT_CODE
# unstages files whose staged changes are whitespace-only (no content changes)
git diff --staged --name-only | xargs -I {} sh -c 'if [ -z "$(git diff --staged -w -- "$1")" ]; then echo "Unstaging (no content change): $1"; git restore --staged -- "$1"; fi' -- {}
#!/bin/bash
# ==============================================================================
# Bash Utility Script for Refactoring Operations (utils.sh) - v5.0
# ==============================================================================
# Changelog:
# - v5.0:
# - Added new read-only helpers: find_line, find_line_num_after, find_block_lines,
# read_before_line, read_before_regex, read_after_line, read_after_regex,
# read_between_lines, read_between_regex (prioritizing grep/core tools).
# - Simplified read_block implementation using head/tail.
# - Added contextual logging (surrounding lines) to stderr on modification failures.
# - Added timestamps to internal _log messages (stderr).
# - Retained awk for complex stateful find/replace (find_block_lines, read_between_regex, repl_between).
# - v4.3:
# - Added Git workspace check in setup: Exit if unstaged changes exist.
# - Modified Prettier failure handling: For tracked files, attempt `git restore`.
# - v4.2: Fixed Bash syntax error in repl_between function.
# - v4.1: Fixed missing 'fi' in run_prettier function.
# - v4.0: Major refactor (Shortened names, repl_between, etc.)
#
# Provides enhanced functions for:
# - Simplified setup and phase management.
# - Verification of prerequisites & clean Git workspace.
# - Accessing file versions (current disk, HEAD commit).
# - Finding lines/blocks dynamically (find_* helpers).
# - Extracting file content segments (read_* helpers).
# - Implicit file backup within modification helpers.
# - Helper functions for common modifications (del_line, ins_after, etc.).
# - Tracking newly created files for rollback.
# - Targeted Prettier execution on modified/created/moved files.
# - Automatic cleanup and rollback with contextual error logging.
#
# Usage (Simplified):
# 1. Define arrays in the calling script (DIRS_TO_CHECK, FILES_TO_MODIFY, etc.).
# 2. Ensure NO UNSTAGED changes exist in Git before running. Stage any intended changes.
# 3. Source this script: `source ./utils.sh || exit 1`
# 4. Call setup: `setup_op`
# 5. Prepare content variables (use read_*, find_*, heredocs).
# 6. Start modification phase: `mod_start` (enables set -e)
# 7. Call helper functions (repl_between, new_file, del_line using dynamic lines, etc.).
# 8. End modification phase: `mod_end` (disables set -e)
# 9. Script runs Prettier (or other validation) in cleanup.
# 10. On failure, check stderr for contextual logs around the error point.
#
# ==============================================================================
# --- Internal State Variables ---
TEMP_DIR=""
declare -A BACKUP_MAP # Associative array: BACKUP_MAP[original_path]=backup_path
declare -a CREATED_FILES_TRACKER=() # Array of paths created/copied by the script (incl. move destinations)
SCRIPT_PHASE="INIT" # Tracks execution phase: INIT -> SETUP -> MODIFYING -> DONE
SCRIPT_ERRORS_OCCURRED=0 # Track if errors occurred during the run for final status
GIT_AVAILABLE=false # Track if git command is available
IS_GIT_REPO=false # Track if inside a Git repository
# --- Internal Helper: Logging (Primarily for stderr/debug) ---
_log() {
  local level="$1"
  shift
  # Add a UTC timestamp
  echo "[${level} $(date -u +'%H:%M:%S')] $@" >&2
}
# --- Markdown Output Helper Functions (stdout) ---
# (No changes from previous version - _md_h2, _md_h3, _md_hr, _md_li, _md_li_simple, etc.)
_md_h2() { echo -e "\n## $@"; }
_md_h3() { echo -e "\n### $@"; }
_md_hr() { echo -e "\n---"; }
_md_li() {
  local message="$1"
  local exit_status="${2:-0}" # Default to 0 if status not provided
  local status_marker="**OK**"
  if [[ "$exit_status" -ne 0 ]]; then
    status_marker="**FAILED** (Code: $exit_status)"
    SCRIPT_ERRORS_OCCURRED=1 # Mark that an error happened
  fi
  echo "* $message ... ${status_marker}"
}
_md_li_simple() { echo "* $@"; } # For list items without status
_md_codeblock_start() { echo -e "\n\`\`\`${1:-text}"; } # Optional language hint
_md_codeblock_end() { echo '```'; }
_md_note() { echo -e "\n*Note: $@*"; }
_md_error_report() {
  echo -e "\n**ERROR:** $@"
  SCRIPT_ERRORS_OCCURRED=1
} # For reporting errors in Markdown
_md_warn_report() { echo -e "\n**WARN:** $@"; }
# --- Internal Helper: Contextual Logging on Error ---
_log_context_on_error() {
  local file="$1"
  local line_num="$2" # The target line number where error occurred
  local msg="$3"      # Error message prefix
  if [[ ! -f "$file" ]]; then
    _log "ERROR" "$msg: File '$file' not found for context."
    return
  fi
  # Ensure line_num is a positive integer, default to 0 if not
  if ! [[ "$line_num" =~ ^[1-9][0-9]*$ ]]; then
    line_num=0 # Set to 0 to show start of file if line num invalid
  fi
  local total_lines=$(wc -l <"$file" | awk '{print $1}') # Get total lines
  # Calculate context range, handling boundaries
  local start_context_line=$((line_num - 3))
  [[ "$start_context_line" -lt 1 ]] && start_context_line=1
  local end_context_line=$((line_num + 3))
  [[ "$end_context_line" -gt "$total_lines" ]] && end_context_line="$total_lines"
  # Handle case where target line itself is out of bounds (already logged by caller)
  if [[ "$line_num" -gt 0 && "$line_num" -le "$total_lines" ]]; then
    _log "DEBUG" "$msg: Context around target line ($line_num) in $file:"
    sed -n "${start_context_line},${end_context_line}p" "$file" | while IFS= read -r line; do _log "DEBUG" "  $line"; done
  elif [[ "$line_num" -eq 0 ]]; then # Handle insertion at line 0 case
    _log "DEBUG" "$msg: Context near start of file ($file) for line 0 insertion:"
    head -n 3 "$file" | while IFS= read -r line; do _log "DEBUG" "  $line"; done
  else # line_num > total_lines (already handled by caller, but log context near end)
    _log "DEBUG" "$msg: Context near end of file ($file) for out-of-bounds line ($line_num):"
    tail -n 5 "$file" | while IFS= read -r line; do _log "DEBUG" "  $line"; done
  fi
}
# --- Cleanup Function (Trap Handler) ---
# (No changes from previous version - handles rollback, Prettier, temp dir cleanup)
#-------------------------------------------------------------------------------
cleanup() {
local exit_status=$? # Capture exit status immediately
local original_phase="$SCRIPT_PHASE" # Capture phase before potential changes
# Ensure errexit is off during cleanup to attempt all steps
set +e
# --- Rollback Logic (stderr logging) ---
if [[ $exit_status -ne 0 && "$original_phase" == "MODIFYING" ]]; then
SCRIPT_ERRORS_OCCURRED=1 # Mark error
_log "ERROR" "*** SCRIPT FAILED DURING MODIFICATION (Exit Code: $exit_status) ***"
_log "INFO" "Attempting rollback from backups..."
# Restore modified/deleted/moved(source) files from backup
if [[ ${#BACKUP_MAP[@]} -gt 0 ]]; then
_log "INFO" " Restoring original files from backup:"
for original_path in "${!BACKUP_MAP[@]}"; do
local backup_path="${BACKUP_MAP[$original_path]}"
if [[ -f "$backup_path" ]]; then
local parent_dir
parent_dir=$(dirname "$original_path")
if [[ ! -d "$parent_dir" ]]; then
_log "WARN" " Parent directory '$parent_dir' for '$original_path' does not exist during rollback. Attempting to recreate..."
if ! mkdir -p "$parent_dir"; then
_log "ERROR" " Failed to recreate parent directory '$parent_dir'. Cannot restore '$original_path'."
continue
fi
fi
if cp -f "$backup_path" "$original_path"; then
_log "INFO" " Restored: $original_path"
else
_log "ERROR" " Failed to restore $original_path from $backup_path"
fi
else
_log "ERROR" " Backup file missing for $original_path (Expected at $backup_path)"
fi
done
else
_log "INFO" " No files were backed up for restoration."
fi
# Remove newly created files / move destinations
if [[ ${#CREATED_FILES_TRACKER[@]} -gt 0 ]]; then
_log "INFO" " Removing newly created files / move destinations:"
# Process in reverse order of creation for potentially nested items
for ((i = ${#CREATED_FILES_TRACKER[@]} - 1; i >= 0; i--)); do
local created_item="${CREATED_FILES_TRACKER[i]}"
if [[ -f "$created_item" ]]; then
if rm "$created_item"; then
_log "INFO" " Removed File: $created_item"
else
_log "ERROR" " Failed to remove created file $created_item"
fi
elif [[ -d "$created_item" ]]; then
# Only remove directories that are empty (rmdir fails otherwise);
# non-empty directories are safer to leave for manual review.
if rmdir "$created_item" 2>/dev/null; then
_log "INFO" " Removed Empty Directory: $created_item"
else
_log "WARN" " Skipped removing tracked directory during rollback (not found or not empty): $created_item"
fi
else
_log "WARN" " Skipped: Created file/dir $created_item not found during rollback."
fi
done
else
_log "INFO" " No newly created files/destinations were tracked for removal."
fi
_log "INFO" "*** ROLLBACK ATTEMPT FINISHED ***"
# Report failure in Markdown to stdout as well
_md_error_report "Script failed during modification phase. Rollback attempted from backups. See logs above (stderr) for details."
elif [[ $exit_status -ne 0 ]]; then # Error not during MODIFYING phase (e.g., setup or validation failure)
SCRIPT_ERRORS_OCCURRED=1
_log "ERROR" "*** SCRIPT FAILED (Phase: $original_phase, Exit Code: $exit_status) ***"
_log "INFO" "Backup-based rollback not triggered for non-modification phase failures."
# Git restore handled validation failures in run_prettier.
# If setup failed, no changes were made.
if [[ "$original_phase" == "SETUP_FAILED" ]]; then
_md_error_report "Script failed during setup. No changes were made."
elif [[ "$original_phase" == "DONE" ]]; then
# Failure likely occurred during validation (e.g., Prettier)
_md_error_report "Script completed modifications but failed during final validation/formatting phase. Git revert attempted for failed tracked files (see logs)."
else
# General error, shouldn't happen if phases are managed correctly
_md_error_report "Script failed during phase '$original_phase'. Check logs."
fi
# If script failed AFTER modifying but BEFORE DONE (shouldn't happen with set -e),
# the files created might need cleanup. The MODIFYING phase check handles this.
# If failure was in validation (Prettier), cleanup of *newly created* files that failed might still be needed.
if [[ "$original_phase" == "DONE" && ${#CREATED_FILES_TRACKER[@]} -gt 0 ]]; then
_log "INFO" "Checking if any newly created files need removal due to validation failure..."
# This is complex: we cannot easily tell here *which* created file failed validation.
# For now, rely on the Prettier log indicating failure on a new file, and clean up manually.
# A more robust solution would pass the list of failed files from run_prettier to cleanup.
fi
fi # End of failure type checks
# --- Final Actions (Run regardless of success/failure, unless setup failed early) ---
if [[ "$original_phase" != "INIT" && "$original_phase" != "SETUP_FAILED" ]]; then
# Run Prettier only if script completed the DONE phase (even if exit_status non-zero from Prettier itself)
if [[ "$original_phase" == "DONE" ]]; then
# Collect files: modified in-place (backup keys) + created/moved-to (tracker)
local files_to_format=()
[[ ${#BACKUP_MAP[@]} -gt 0 ]] && files_to_format+=("${!BACKUP_MAP[@]}")
[[ ${#CREATED_FILES_TRACKER[@]} -gt 0 ]] && files_to_format+=("${CREATED_FILES_TRACKER[@]}")
if [[ ${#files_to_format[@]} -gt 0 ]]; then
# Deduplicate and ensure files exist before formatting
local unique_files_to_format=()
declare -A seen_files_map
for file in "${files_to_format[@]}"; do
if [[ -z "${seen_files_map[$file]}" ]]; then
if [[ -f "$file" ]]; then # Only format existing files
unique_files_to_format+=("$file")
seen_files_map["$file"]=1
fi
fi
done
if [[ ${#unique_files_to_format[@]} -gt 0 ]]; then
# run_prettier handles its own Markdown output
# It returns non-zero if it failed, contributing to final exit status
run_prettier "${unique_files_to_format[@]}"
else
_md_hr
_md_h3 "Formatting Phase (Prettier)"
_md_li_simple "Script completed, but no existing files were tracked for formatting."
fi
else
_md_hr
_md_h3 "Formatting Phase (Prettier)"
_md_li_simple "Script completed, but no files were tracked for formatting."
fi
fi # End if phase was DONE
# Always remove the temporary directory if it exists
if [[ -n "$TEMP_DIR" && -d "$TEMP_DIR" ]]; then
rm -rf "$TEMP_DIR"
fi
# Final Status Summary
_md_hr
if [[ "$SCRIPT_ERRORS_OCCURRED" -eq 0 ]]; then
_md_h3 "Script Execution Summary"
_md_li_simple "Completed successfully."
else
_md_h3 "Script Execution Summary"
_md_li_simple "Finished with errors. Please review output and logs (stderr)."
fi
_md_hr
fi
# Preserve the original exit status for the script process
exit $exit_status
} # end cleanup
# --- Setup Function ---
# (No changes from previous version - initializes temp dir, sets trap, verifies Git status & prerequisites)
#-------------------------------------------------------------------------------
setup_op() {
_md_h2 "Script Execution: Refactoring Task"
_md_hr
_md_h3 "Setup & Prerequisites"
_md_li_simple "Initializing refactor environment..."
TEMP_DIR=$(mktemp -d -t refactor_backup_XXXXXX)
local setup_status=$?
if [[ "$setup_status" -ne 0 || ! -d "$TEMP_DIR" ]]; then
_md_error_report "Failed to create temporary directory."
set_script_phase "SETUP_FAILED"
exit 1
fi
_log "DEBUG" "Temporary directory created: $TEMP_DIR"
trap cleanup EXIT ERR INT
setup_status=$?
_md_li "Setting up traps" $setup_status
[[ "$setup_status" -ne 0 ]] && set_script_phase "SETUP_FAILED" && exit 1
set_script_phase "SETUP"
# --- Git Availability and Repository Check ---
if command -v git &>/dev/null; then
GIT_AVAILABLE=true
if git rev-parse --is-inside-work-tree >/dev/null 2>&1; then
IS_GIT_REPO=true
_log "DEBUG" "Git command found and inside a Git repository."
else
IS_GIT_REPO=false
_log "DEBUG" "Git command found, but not inside a Git repository."
fi
else
GIT_AVAILABLE=false
IS_GIT_REPO=false
_log "DEBUG" "Git command not found."
fi
# --- Git Working Directory Check ---
_md_li_simple "Verifying Git working directory state..."
if [[ "$IS_GIT_REPO" == true ]]; then
# Check specifically for unstaged changes (working dir vs index)
if ! git diff --quiet --; then
local unstaged_files
mapfile -t unstaged_files < <(git diff --name-only)
_md_error_report "Unclean Git Working Directory. Unstaged changes detected:"
_md_codeblock_start text
printf "%s\n" "${unstaged_files[@]}"
_md_codeblock_end
_md_li_simple "Please stash or commit unstaged changes before running the script."
_md_li "Working directory clean vs index" 1 # Report failure
set_script_phase "SETUP_FAILED"
exit 1
else
# Check if there are staged changes
if ! git diff --quiet --cached --; then
_md_note "You have staged changes. Script will proceed. Failures on tracked files during validation will revert to the STAGED state."
_md_codeblock_start git
git diff --cached --stat
_md_codeblock_end
_md_li "Working directory clean vs index (staged changes detected)" 0
else
# No unstaged AND no staged changes
_md_li "Working directory clean vs index (no staged changes)" 0
fi
fi
elif [[ "$GIT_AVAILABLE" == true ]]; then
_md_warn_report "Current directory is not a Git repository. Cannot enforce clean working directory state."
_md_li "Working directory clean vs index (Not applicable)" 0 # Treat as OK
else
_md_warn_report "'git' command not found. Cannot enforce clean working directory state."
_md_li "Working directory clean vs index (Not applicable)" 0 # Treat as OK
fi
# --- End Git Check ---
# Verify prerequisites - exits on failure
_log "INFO" "Verifying prerequisites..."
check_reqs
local verify_status=$? # Capture status from check_reqs
# Use verify_status for the final setup message
_md_li "Setup phase complete" $verify_status
# If check_reqs failed, it already exited, but double-check
[[ "$verify_status" -ne 0 ]] && set_script_phase "SETUP_FAILED" && exit 1
}
# --- Phase Management Functions ---
# (No changes from previous version - mod_start, mod_end, set_script_phase)
mod_start() {
_md_hr
_md_h3 "Modifications Phase"
_log "INFO" "--- Starting Modifications (set -e enabled) ---"
set_script_phase "MODIFYING"
set -e # Enable errexit for the modification phase
}
mod_end() {
local errexit_status=$? # Check if set -e caused an early exit
set +e # Disable errexit *before* setting phase or logging
set_script_phase "DONE"
if [[ $errexit_status -eq 0 ]]; then
_md_li_simple "Modifications phase complete."
_log "INFO" "--- Modifications Complete (set -e disabled) ---"
else
# Error reporting handled by the trap/cleanup function due to `set -e` exit
: # No-op here, cleanup trap takes over
fi
# Explicitly return the status from the modification block
return $errexit_status
}
set_script_phase() {
local phase_name="$1"
SCRIPT_PHASE="$phase_name"
_log "DEBUG" "Script phase set to: $SCRIPT_PHASE"
}
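# Example (a sketch of how a calling script brackets its edits; the path and
# line number are hypothetical):
#   mod_start
#   del_line "src/app.js" 42
#   mod_end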
# --- Prerequisites Verification Function ---
# (No changes from previous version - check_reqs)
check_reqs() {
# _log "INFO" "Verifying prerequisites..." # Logging moved to setup_op
local error_count=0
local overall_status=0
# Check specified directories that MUST exist
if declare -p DIRS_TO_CHECK &>/dev/null; then
for dir in "${DIRS_TO_CHECK[@]}"; do
if [[ ! -d "$dir" ]]; then
# Silently create the directory if it doesn't exist
mkdir -p "$dir"
if [[ $? -ne 0 ]]; then
_md_error_report "Failed to create required directory: \`$dir\`"
error_count=$((error_count + 1))
fi
fi
done
else
_log "DEBUG" "'DIRS_TO_CHECK' array not defined in calling script."
fi
# Check files to modify
if declare -p FILES_TO_MODIFY &>/dev/null; then
for file in "${FILES_TO_MODIFY[@]}"; do
if [[ ! -f "$file" ]]; then
_md_error_report "File to modify not found: \`$file\`"
((error_count++))
fi
done
else
_log "DEBUG" "'FILES_TO_MODIFY' array not defined."
fi
# Check files to create (target doesn't exist)
if declare -p FILES_TO_CREATE &>/dev/null; then
for file in "${FILES_TO_CREATE[@]}"; do
if [[ -e "$file" ]]; then
_md_error_report "Path for file to create already exists: \`$file\`"
((error_count++))
fi
done
else
_log "DEBUG" "'FILES_TO_CREATE' array not defined."
fi
# Check files to move (source exists, dest doesn't exist)
if declare -p FILES_TO_MOVE &>/dev/null; then
for pair in "${FILES_TO_MOVE[@]}"; do
local source_path="${pair%%:*}"
local dest_path="${pair#*:}"
if [[ "$source_path" == "$dest_path" || -z "$source_path" || -z "$dest_path" || "$pair" == "$source_path" ]]; then
_md_error_report "Invalid format in FILES_TO_MOVE: '$pair'. Use 'source:destination'."
((error_count++))
continue
fi
if [[ ! -f "$source_path" ]]; then
_md_error_report "File to move (source) not found: \`$source_path\`"
((error_count++))
fi
if [[ -e "$dest_path" ]]; then
_md_error_report "Move destination path already exists: \`$dest_path\`"
((error_count++))
fi
done
else
_log "DEBUG" "'FILES_TO_MOVE' array not defined."
fi
# Check files to delete
if declare -p FILES_TO_DELETE &>/dev/null; then
for file in "${FILES_TO_DELETE[@]}"; do
if [[ ! -f "$file" ]]; then
_md_error_report "File to delete not found: \`$file\`"
((error_count++))
fi
done
else
_log "DEBUG" "'FILES_TO_DELETE' array not defined."
fi
if [[ $error_count -gt 0 ]]; then
_md_li "Prerequisite verification" 1 # Report failure
_log "ERROR" "Prerequisite verification failed ($error_count errors). Exiting."
set_script_phase "SETUP_FAILED"
exit 1
else
_md_li "Prerequisite verification" 0 # Report success
_log "INFO" "Prerequisites verified successfully."
overall_status=0
fi
return $overall_status
}
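# Example (hypothetical arrays a calling script would declare before setup_op;
# check_reqs reads whichever of these are defined):
#   DIRS_TO_CHECK=("src/utils")
#   FILES_TO_MODIFY=("src/app.js")
#   FILES_TO_MOVE=("src/old.js:src/utils/new.js")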
# --- Backup Function (Internal Use by Helpers) ---
# (No changes from previous version - _backup_file_internal)
_backup_file_internal() {
local original_path="$1"
if [[ -z "$original_path" ]]; then
_log "ERROR" "(_backup_file_internal): No file path"
exit 1
fi
# Backup only if file exists and is a regular file
if [[ ! -f "$original_path" ]]; then
_log "DEBUG" "(_backup_file_internal): Source path is not a regular file or does not exist: $original_path. Skipping backup."
return 0 # Don't fail here, let caller handle non-existence if needed
fi
if [[ -z "$TEMP_DIR" || ! -d "$TEMP_DIR" ]]; then
_log "ERROR" "(_backup_file_internal): Temp dir invalid"
exit 1
fi
# Check if already backed up
if [[ -v BACKUP_MAP["$original_path"] ]]; then
_log "DEBUG" "Already backed up: $original_path"
return 0
fi
local safe_suffix=$(echo "$original_path" | tr '/' '_')
local backup_filename="backup_${safe_suffix}"
local backup_path="$TEMP_DIR/$backup_filename"
cp "$original_path" "$backup_path"
local status=$?
if [[ $status -eq 0 ]]; then
BACKUP_MAP["$original_path"]="$backup_path"
_log "DEBUG" "Backed up $original_path to $backup_path"
else
_log "ERROR" "Failed to back up $original_path to $backup_path"
_md_error_report "Critical backup failure for \`$original_path\`."
exit 1 # Backup failure is critical
fi
return $status
}
# --- Track Created File Function (Internal Use by Helpers) ---
# (No changes from previous version - _track_created_file_internal)
_track_created_file_internal() {
local created_path="$1"
if [[ -z "$created_path" ]]; then
_log "ERROR" "(_track_created_file_internal): No file path"
return 1 # Non-critical? Let caller handle.
fi
CREATED_FILES_TRACKER+=("$created_path")
_log "DEBUG" "Tracking created/moved-to path: $created_path"
return 0
}
# --- Internal Text Normalization Helper ---
# (No changes from previous version - _normalize_text)
_normalize_text() {
# Remove all whitespace characters (space, tab, newline, etc.)
# Handle potential empty input gracefully
if [[ -n "$1" ]]; then
tr -d '[[:space:]]' <<<"$1"
else
echo ""
fi
}
# ==============================================================================
# --- Public READ-ONLY Helper Functions (New & Updated) ---
# ==============================================================================
# --- Read Current File Content ---
# (No changes from previous version - read_curr)
read_curr() {
local file="$1"
local action_msg="Reading current content of \`$file\`"
local usage="Usage: read_curr \"/path/to/file\""
if [[ -z "$file" ]]; then
_log "ERROR" "[read_curr] No file path provided. $usage"
return 1
fi
if [[ ! -f "$file" ]]; then
_log "ERROR" "[read_curr] File not found: '$file'. $usage"
return 1 # File must exist
fi
if [[ ! -r "$file" ]]; then
_log "ERROR" "[read_curr] File not readable: '$file'. $usage"
return 1
fi
cat "$file"
local status=$?
if [[ $status -ne 0 ]]; then
_log "ERROR" "[read_curr] Failed to read file '$file' (cat exit code: $status)."
return $status
else
# Log success with snippet size for context
local content_size=$(wc -c <"$file")
_log "DEBUG" "[read_curr] Success: Read $content_size bytes from '$file'."
return 0
fi
}
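# Example (hypothetical path; prints the file's content to stdout):
#   content=$(read_curr "src/app.js") || echo "read failed"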
# --- Read HEAD Commit File Content ---
# (No changes from previous version - read_head)
read_head() {
local file="$1"
local action_msg="Reading content of \`$file\` from HEAD commit"
local usage="Usage: read_head \"/path/to/file\""
if [[ -z "$file" ]]; then
_log "ERROR" "[read_head] No file path provided. $usage"
return 1
fi
if [[ "$GIT_AVAILABLE" == false ]]; then
_log "ERROR" "[read_head] 'git' command not found. Cannot read from commit."
return 1
fi
if [[ "$IS_GIT_REPO" == false ]]; then
_log "ERROR" "[read_head] Not inside a Git repository. Cannot read from HEAD."
return 1
fi
local git_content
local git_status
# Use a command group to capture output and exit status, hiding git's stderr
{
git_content=$(git show "HEAD:$file")
git_status=$?
} 2>/dev/null # Hide git errors from stderr
if [[ $git_status -ne 0 ]]; then
_log "ERROR" "[read_head] Failed to read '$file' from HEAD (git exit code: $git_status). File may not exist in HEAD or git error occurred."
# Output nothing on failure
return $git_status
fi
# Log success with size
local content_size=$(printf "%s" "$git_content" | wc -c) # printf avoids echo's extra trailing newline in the count
_log "DEBUG" "[read_head] Success: Read $content_size bytes for '$file' from HEAD."
# Print the captured content to stdout
printf "%s" "$git_content"
return 0
}
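# Example (hypothetical path; compares working-tree content against HEAD):
#   if [[ "$(read_curr "src/app.js")" == "$(read_head "src/app.js")" ]]; then
#     echo "unchanged since last commit"
#   fi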
# --- Extract Block Function (Renamed read_block, using head/tail) ---
# READ-ONLY. Prints extracted block from CURRENT file to stdout.
#-------------------------------------------------------------------------------
read_block() {
local file="$1"
local start_line="$2"
local end_line="$3"
local usage="Usage: read_block \"/path/to/file\" start_line end_line"
# --- Argument Validation ---
if [[ -z "$file" || -z "$start_line" || -z "$end_line" || ! "$start_line" =~ ^[1-9][0-9]*$ || ! "$end_line" =~ ^[1-9][0-9]*$ || "$start_line" -gt "$end_line" ]]; then
_log "ERROR" "[read_block] Invalid arguments. $usage (Line numbers must be positive integers, start <= end)"
return 1
fi
if [[ ! -f "$file" ]]; then
_log "ERROR" "[read_block] File not found: '$file'. $usage"
return 1
fi
if [[ ! -r "$file" ]]; then
_log "ERROR" "[read_block] File not readable: '$file'. $usage"
return 1
fi
# --- Extraction using head/tail ---
local num_lines=$((end_line - start_line + 1))
if [[ "$num_lines" -le 0 ]]; then
_log "DEBUG" "[read_block] Calculated 0 lines to extract (start=$start_line, end=$end_line). Returning empty."
echo "" # Return empty string for 0 lines
return 0
fi
local extracted_content
extracted_content=$(tail -n "+$start_line" "$file" | head -n "$num_lines")
local status=$? # Pipeline status reflects `head` (the last command in the pipe)
if [[ $status -ne 0 ]]; then
_log "ERROR" "[read_block] Failed to extract lines $start_line-$end_line from '$file' (tail/head exit code: $status)"
return 1
fi
# Check if extracted content is empty, potentially due to lines beyond EOF
if [[ -z "$extracted_content" ]]; then
local file_lines=$(wc -l <"$file" | awk '{print $1}') # Get actual line count
if [[ "$start_line" -gt "$file_lines" ]]; then
_log "WARN" "[read_block] Start line $start_line is beyond end of file ($file_lines lines) for '$file'. Extraction resulted in empty block."
else
_log "DEBUG" "[read_block] Extracted block $start_line-$end_line from '$file' is empty."
fi
fi
# --- Debug Logging ---
local line_count=$(printf "%s" "$extracted_content" | awk 'END{print NR}')
local char_count=${#extracted_content}
_log "DEBUG" "[read_block] Success: Extracted lines $start_line-$end_line ($line_count lines, $char_count chars) from '$file'."
# --- Output to STDOUT ---
printf "%s" "$extracted_content"
return 0 # Success
}
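# Example (hypothetical path; extracts lines 10-20 inclusive to stdout):
#   block=$(read_block "src/app.js" 10 20)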
# --- NEW: Find Line Number Function (Using grep) ---
# READ-ONLY. Prints line number of first match to stdout.
# @param {string} file - Path to the file.
# @param {string} basic_regex - Basic Regular Expression pattern.
# @returns {number} Exit code 0 on success, non-zero on failure.
#-------------------------------------------------------------------------------
find_line() {
local file="$1"
local basic_regex="$2"
local usage="Usage: find_line \"/path/to/file\" \"basic_regex\""
if [[ -z "$file" || -z "$basic_regex" ]]; then
_log "ERROR" "[find_line] Invalid arguments. $usage"
return 1
fi
if [[ ! -f "$file" || ! -r "$file" ]]; then
_log "ERROR" "[find_line] File not found or not readable: '$file'."
return 1
fi
local grep_output
grep_output=$(grep -n -m 1 -e "$basic_regex" "$file")
local status=$? # Capture grep's status directly (piping into cut would report cut's status instead)
if [[ $status -eq 0 && -n "$grep_output" ]]; then
local line_num="${grep_output%%:*}"
_log "DEBUG" "[find_line] Found first match for '$basic_regex' at line $line_num in '$file'."
echo "$line_num"
return 0
else
# grep returns 1 if not found, >1 for errors
_log "DEBUG" "[find_line] Pattern '$basic_regex' not found in '$file' (grep status: $status)."
return 1 # Return non-zero consistently for "not found"
fi
}
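# Example (hypothetical path and pattern; prints the 1-based line number of the
# first match to stdout):
#   line=$(find_line "src/app.js" "^function main") && echo "main() starts at line $line"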
# --- NEW: Find Line Number After Function (Using tail|grep) ---
# READ-ONLY. Prints line number of first match AFTER a given line to stdout.
# @param {string} file - Path to the file.
# @param {number} start_line - Line number AFTER which to start searching.
# @param {string} basic_regex - Basic Regular Expression pattern.
# @returns {number} Exit code 0 on success, non-zero on failure.
#-------------------------------------------------------------------------------
find_line_num_after() {
local file="$1"
local start_line="$2"
local basic_regex="$3"
local usage="Usage: find_line_num_after \"/path/to/file\" start_line \"basic_regex\""
if [[ -z "$file" || -z "$start_line" || ! "$start_line" =~ ^[0-9]+$ || "$start_line" -lt 0 || -z "$basic_regex" ]]; then
_log "ERROR" "[find_line_num_after] Invalid arguments. $usage (start_line must be non-negative integer)"
return 1
fi
if [[ ! -f "$file" || ! -r "$file" ]]; then
_log "ERROR" "[find_line_num_after] File not found or not readable: '$file'."
return 1
fi
local search_start_line=$((start_line + 1))
local found_relative_line
local grep_output
local grep_status
# Capture output and status separately
grep_output=$(tail -n "+$search_start_line" "$file" | grep -n -m 1 -e "$basic_regex")
grep_status=$?
if [[ $grep_status -eq 0 && -n "$grep_output" ]]; then
found_relative_line=$(echo "$grep_output" | cut -d: -f1)
if [[ "$found_relative_line" =~ ^[1-9][0-9]*$ ]]; then
local final_line=$((start_line + found_relative_line))
_log "DEBUG" "[find_line_num_after] Found first match for '$basic_regex' after line $start_line at line $final_line in '$file'."
echo "$final_line"
return 0
else
_log "ERROR" "[find_line_num_after] Failed to parse relative line number from grep output: '$grep_output'."
return 1
fi
else
_log "DEBUG" "[find_line_num_after] Pattern '$basic_regex' not found after line $start_line in '$file' (grep status: $grep_status)."
return 1 # Not found or error
fi
}
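# Example (hypothetical; locates the first closing brace after a function's
# opening line by chaining the two finders):
#   start=$(find_line "src/app.js" "^function main") &&
#   end=$(find_line_num_after "src/app.js" "$start" "^}")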
# --- NEW: Find Block Lines Function (Using awk) ---
# READ-ONLY. Prints start and end line numbers of block to stdout.
# @param {string} file - Path to the file.
# @param {string} start_regex - Basic Regex for the start line.
# @param {string} end_regex - Basic Regex for the end line (first match after start).
# @returns {number} Exit code 0 on success, non-zero on failure.
#-------------------------------------------------------------------------------
find_block_lines() {
local file="$1"
local start_regex="$2"
local end_regex="$3"
local usage="Usage: find_block_lines \"/path/to/file\" \"start_regex\" \"end_regex\""
if [[ -z "$file" || -z "$start_regex" || -z "$end_regex" ]]; then
_log "ERROR" "[find_block_lines] Invalid arguments. $usage"
return 1
fi
if [[ ! -f "$file" || ! -r "$file" ]]; then
_log "ERROR" "[find_block_lines] File not found or not readable: '$file'."
return 1
fi
local block_lines
block_lines=$(awk -v start_re="$start_regex" -v end_re="$end_regex" '
$0 ~ start_re { if (!s) s=NR }
s && $0 ~ end_re { print s, NR; found_end=1; exit }
END { if (!s || !found_end) exit 1 } # Exit fail if block incomplete
' "$file")
local awk_status=$?
if [[ $awk_status -eq 0 && -n "$block_lines" ]]; then
_log "DEBUG" "[find_block_lines] Found block between '$start_regex' and '$end_regex' at lines: $block_lines in '$file'."
echo "$block_lines"
return 0
else
_log "DEBUG" "[find_block_lines] Failed to find block between '$start_regex' and '$end_regex' in '$file' (awk status: $awk_status)."
return 1
fi
}
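# Example (hypothetical; awk prints "start end" on one line, split with read):
#   if lines=$(find_block_lines "src/app.js" "^function main" "^}"); then
#     read -r block_start block_end <<<"$lines"
#   fi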
# --- NEW: Read Before Line Function (Using head) ---
# READ-ONLY. Prints content before line number to stdout.
# @param {string} file - Path to the file.
# @param {number} line_num - Line number to read before (exclusive).
# @returns {number} Exit code 0 on success, non-zero on failure.
#-------------------------------------------------------------------------------
read_before_line() {
local file="$1"
local line_num="$2"
local usage="Usage: read_before_line \"/path/to/file\" line_number"
if [[ -z "$file" || -z "$line_num" || ! "$line_num" =~ ^[1-9][0-9]*$ ]]; then
_log "ERROR" "[read_before_line] Invalid arguments. $usage (line_num must be positive integer)"
return 1
fi
if [[ ! -f "$file" || ! -r "$file" ]]; then
_log "ERROR" "[read_before_line] File not found or not readable: '$file'."
return 1
fi
local target_line=$((line_num - 1))
if [[ "$target_line" -le 0 ]]; then
_log "DEBUG" "[read_before_line] Line number <= 1 specified. Returning empty string."
echo ""
return 0
fi
head -n "$target_line" "$file"
local status=$?
if [[ $status -ne 0 ]]; then
_log "ERROR" "[read_before_line] Failed to read lines before $line_num from '$file' (head status: $status)."
return 1
fi
_log "DEBUG" "[read_before_line] Success reading lines before $line_num from '$file'."
return 0
}
# --- NEW: Read Before Regex Function (Using awk) ---
# READ-ONLY. Prints content before first line matching regex to stdout.
# @param {string} file - Path to the file.
# @param {string} basic_regex - Basic Regex for the line to stop before.
# @returns {number} Exit code 0 on success, non-zero on failure.
#-------------------------------------------------------------------------------
read_before_regex() {
local file="$1"
local basic_regex="$2"
local usage="Usage: read_before_regex \"/path/to/file\" \"basic_regex\""
if [[ -z "$file" || -z "$basic_regex" ]]; then
_log "ERROR" "[read_before_regex] Invalid arguments. $usage"
return 1
fi
if [[ ! -f "$file" || ! -r "$file" ]]; then
_log "ERROR" "[read_before_regex] File not found or not readable: '$file'."
return 1
fi
awk -v pattern="$basic_regex" '$0 ~ pattern {exit} {print}' "$file"
local status=$?
# awk exits 0 even if pattern not found (prints whole file) or found immediately (prints nothing)
if [[ $status -ne 0 ]]; then
_log "ERROR" "[read_before_regex] Failed executing awk for '$file' (status: $status)."
return 1
fi
_log "DEBUG" "[read_before_regex] Success reading lines before '$basic_regex' from '$file'."
return 0
}
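# Example (hypothetical; prints everything above the first import line):
#   header=$(read_before_regex "src/app.js" "^import ")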
# --- NEW: Read After Line Function (Using tail) ---
# READ-ONLY. Prints content after line number to stdout.
# @param {string} file - Path to the file.
# @param {number} line_num - Line number to read after (exclusive).
# @returns {number} Exit code 0 on success, non-zero on failure.
#-------------------------------------------------------------------------------
read_after_line() {
local file="$1"
local line_num="$2"
local usage="Usage: read_after_line \"/path/to/file\" line_number"
if [[ -z "$file" || -z "$line_num" || ! "$line_num" =~ ^[0-9]+$ || "$line_num" -lt 0 ]]; then
_log "ERROR" "[read_after_line] Invalid arguments. $usage (line_num must be non-negative integer)"
return 1
fi
if [[ ! -f "$file" || ! -r "$file" ]]; then
_log "ERROR" "[read_after_line] File not found or not readable: '$file'."
return 1
fi
local target_line=$((line_num + 1))
tail -n "+$target_line" "$file"
local status=$?
if [[ $status -ne 0 ]]; then
_log "ERROR" "[read_after_line] Failed to read lines after $line_num from '$file' (tail status: $status)."
return 1
fi
_log "DEBUG" "[read_after_line] Success reading lines after $line_num from '$file'."
return 0
}
# --- NEW: Read After Regex Function (Using awk) ---
# READ-ONLY. Prints content after first line matching regex to stdout.
# @param {string} file - Path to the file.
# @param {string} basic_regex - Basic Regex for the line to start after.
# @returns {number} Exit code 0 on success, non-zero on failure.
#-------------------------------------------------------------------------------
read_after_regex() {
local file="$1"
local basic_regex="$2"
local usage="Usage: read_after_regex \"/path/to/file\" \"basic_regex\""
if [[ -z "$file" || -z "$basic_regex" ]]; then
_log "ERROR" "[read_after_regex] Invalid arguments. $usage"
return 1
fi
if [[ ! -f "$file" || ! -r "$file" ]]; then
_log "ERROR" "[read_after_regex] File not found or not readable: '$file'."
return 1
fi
awk -v pattern="$basic_regex" 'found {print} $0 ~ pattern {found=1}' "$file"
local status=$?
# awk exits 0 even if pattern not found (prints nothing)
if [[ $status -ne 0 ]]; then
_log "ERROR" "[read_after_regex] Failed executing awk for '$file' (status: $status)."
return 1
fi
_log "DEBUG" "[read_after_regex] Success reading lines after '$basic_regex' from '$file'."
return 0
}
# --- NEW: Read Between Lines Function (Using tail|head) ---
# READ-ONLY. Prints content between lines (inclusive) to stdout.
# @param {string} file - Path to the file.
# @param {number} start_line - Start line number (inclusive).
# @param {number} end_line - End line number (inclusive).
# @returns {number} Exit code 0 on success, non-zero on failure.
#-------------------------------------------------------------------------------
read_between_lines() {
# This is essentially the same implementation as the simplified read_block
read_block "$@" # Delegate to read_block
return $? # Return status of read_block
}
# --- NEW: Read Between Regex Function (Using awk) ---
# READ-ONLY. Prints content between lines matching regex (inclusive default) to stdout.
# @param {string} file - Path to the file.
# @param {string} start_regex - Basic Regex for start line.
# @param {string} end_regex - Basic Regex for end line.
# @param {string} [--exclusive] - Optional flag for exclusive range.
# @returns {number} Exit code 0 on success, non-zero on failure.
#-------------------------------------------------------------------------------
read_between_regex() {
local file="$1"
local start_regex="$2"
local end_regex="$3"
local mode="inclusive" # Default
local usage="Usage: read_between_regex \"file\" \"start_regex\" \"end_regex\" [--exclusive]"
if [[ "$4" == "--exclusive" ]]; then
mode="exclusive"
elif [[ -n "$4" ]]; then
_log "ERROR" "[read_between_regex] Invalid fourth argument. Only '--exclusive' allowed. $usage"
return 1
fi
if [[ -z "$file" || -z "$start_regex" || -z "$end_regex" ]]; then
_log "ERROR" "[read_between_regex] Invalid arguments. $usage"
return 1
fi
if [[ ! -f "$file" || ! -r "$file" ]]; then
_log "ERROR" "[read_between_regex] File not found or not readable: '$file'."
return 1
fi
local awk_script
if [[ "$mode" == "inclusive" ]]; then
awk_script='
$0 ~ start_re { p = 1 }
p { print }
$0 ~ end_re { p = 0 }
'
else # exclusive
awk_script='
$0 ~ end_re { p = 0 }
p { print }
$0 ~ start_re { p = 1 }
'
fi
awk -v start_re="$start_regex" -v end_re="$end_regex" "$awk_script" "$file"
local status=$?
if [[ $status -ne 0 ]]; then
_log "ERROR" "[read_between_regex] Failed executing awk for '$file' (status: $status)."
return 1
fi
_log "DEBUG" "[read_between_regex] Success reading lines between '$start_regex' and '$end_regex' ($mode) from '$file'."
return 0
}
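# Example (hypothetical; --exclusive omits the matching boundary lines themselves):
#   body=$(read_between_regex "src/app.js" "^function main" "^}" --exclusive)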
# ==============================================================================
# --- Public Modification Helper Functions (Updated Error Logging) ---
# ==============================================================================
# --- Delete Line Function ---
del_line() {
local file="$1"
local line_num="$2"
local action_msg="Deleting line $line_num from \`$file\`"
if [[ -z "$file" || -z "$line_num" || ! "$line_num" =~ ^[1-9][0-9]*$ ]]; then # Must be 1 or greater
_md_li "$action_msg (Invalid args)" 1
_log "ERROR" "Usage: del_line \"/path/to/file\" line_number (must be positive integer >= 1)"
# Context logging before exit
_log_context_on_error "$file" "$line_num" "del_line error"
exit 1
fi
if [[ ! -f "$file" ]]; then
_md_li "$action_msg (File not found)" 1
_log "ERROR" "File not found: $file"
exit 1 # No context to show
fi
_backup_file_internal "$file" || exit 1 # Exit if backup fails
# Check line number bounds before sed
local total_lines=$(wc -l <"$file" | awk '{print $1}')
if [[ "$line_num" -gt "$total_lines" ]]; then
_md_li "$action_msg (Line number out of bounds)" 1
_log "ERROR" "Line number $line_num is greater than total lines ($total_lines) in '$file'."
_log_context_on_error "$file" "$line_num" "del_line error"
exit 1
fi
sed -i "${line_num}d" "$file"
local status=$?
_md_li "$action_msg" $status
_log "INFO" "del_line status: $status for $file:$line_num"
if [[ $status -ne 0 ]]; then
_log_context_on_error "$file" "$line_num" "del_line error"
exit 1 # Trigger trap on failure
fi
return $status
}
# --- Replace Line Function ---
# Replaces lines exactly matching a regex with new content
repl_line() {
local file="$1"
local match_regex="$2" # POSIX ERE expected by awk
local new_line_content="$3"
local action_msg="Replacing line matching regex '$match_regex' in \`$file\`"
if [[ -z "$file" || -z "$match_regex" ]]; then
_md_li "$action_msg (Invalid args)" 1
_log "ERROR" "Usage: repl_line \"/path/to/file\" \"match_regex\" \"new_line_content\""
# Context: Maybe show first few lines or full file if small? Too complex for now.
exit 1
fi
if [[ ! -f "$file" ]]; then
_md_li "$action_msg (File not found)" 1
_log "ERROR" "File not found: $file"
exit 1
fi
_backup_file_internal "$file" || exit 1
local temp_file="${TEMP_DIR}/repl_line_tmp_$(basename "$file")_$$"
local awk_status=0
local mv_status=0
_log "DEBUG" "Regex passed to awk: '$match_regex'"
awk -v pattern="$match_regex" -v newl="$new_line_content" '$0 ~ pattern { print newl; next } { print }' "$file" >"$temp_file"
awk_status=$?
# (File existence check logic remains same as v4.3)
if [[ $awk_status -eq 0 ]]; then
if [[ ! -f "$temp_file" ]]; then
if grep -qE "$match_regex" "$file"; then
_log "ERROR" "awk succeeded and match exists, but temp file '$temp_file' not created for repl_line in $file"
awk_status=1
elif [[ ! -s "$file" ]]; then
_log "DEBUG" "repl_line: Original file empty and no match found. Temp file correctly empty."
touch "$temp_file"
else
_log "DEBUG" "repl_line: awk succeeded, no match found, temp file not created (as expected)."
cp "$file" "$temp_file" || awk_status=$?
fi
fi
fi
if [[ $awk_status -eq 0 ]]; then
mv "$temp_file" "$file"
mv_status=$?
if [[ $mv_status -ne 0 ]]; then
_log "ERROR" "Failed to move temp file '$temp_file' to '$file'"
rm "$temp_file" 2>/dev/null
fi
else
_log "ERROR" "awk command failed (exit code: $awk_status) for repl_line in $file"
# Context: Show lines matching the regex that awk failed on?
local matching_lines=$(grep -nE "$match_regex" "$file" || true) # Get matching lines with numbers
if [[ -n "$matching_lines" ]]; then
_log "DEBUG" "Lines potentially matching '$match_regex' in '$file' during repl_line failure:"
echo "$matching_lines" | while IFS= read -r line; do _log "DEBUG" " $line"; done
fi
rm "$temp_file" 2>/dev/null
mv_status=1 # Ensure overall failure reflects awk failure
fi
local overall_status=$((awk_status || mv_status))
_md_li "$action_msg" $overall_status
_log "INFO" "repl_line status: $overall_status for $file"
[[ $overall_status -ne 0 ]] && exit 1
return $overall_status
}
# --- Substitute In Line Function ---
# (Add contextual logging on sed failure)
sub_in_line() {
local file="$1"
local match_pattern="$2"
local replacement_text="$3"
local delimiter="${4:-|}" # Use '|' as default delimiter, allow override
local sed_opts="${5}" # Optional extra sed options (e.g., 'g' for global)
local action_msg="Substituting pattern '$match_pattern' with '$replacement_text' in \`$file\`"
# Basic validation
if [[ -z "$file" || -z "$match_pattern" ]]; then
_md_li "$action_msg (Invalid args - missing file or pattern)" 1
_log "ERROR" "Usage: sub_in_line \"file\" \"match_pattern\" \"replacement_text\" [delimiter] [sed_options]"
exit 1
fi
if [[ ! -f "$file" ]]; then
_md_li "$action_msg (File not found)" 1
_log "ERROR" "File not found: $file"
exit 1
fi
# Check if delimiter exists in pattern or replacement (crude check)
if [[ "$match_pattern" == *"$delimiter"* || "$replacement_text" == *"$delimiter"* ]]; then
_md_li "$action_msg (Delimiter collision)" 1
_log "ERROR" "Delimiter '$delimiter' found in pattern or replacement. Choose a different delimiter."
exit 1
fi
_backup_file_internal "$file" || exit 1
local sed_script="s${delimiter}${match_pattern}${delimiter}${replacement_text}${delimiter}${sed_opts}"
_log "DEBUG" "Executing sed command: sed -i \"$sed_script\" \"$file\""
sed -i "$sed_script" "$file"
local status=$?
_md_li "$action_msg" $status
_log "INFO" "sub_in_line status: $status for $file"
if [[ $status -ne 0 ]]; then
# Context: Show lines matching the pattern that sed failed on?
local matching_lines=$(grep -nF "$match_pattern" "$file" || true) # Use grep -F for fixed string match if pattern is simple
if [[ -z "$matching_lines" ]]; then # Fallback to regex grep if fixed string not found
matching_lines=$(grep -nE "$match_pattern" "$file" || true)
fi
if [[ -n "$matching_lines" ]]; then
_log "DEBUG" "Lines potentially containing '$match_pattern' in '$file' during sub_in_line failure:"
echo "$matching_lines" | while IFS= read -r line; do _log "DEBUG" " $line"; done
fi
exit 1 # Trigger trap on failure
fi
return $status
}
# --- Insert Line After Function ---
# (Add contextual logging on awk/mv failure)
ins_after() {
local file="$1"
local line_num="$2"
local content_block="$3" # Can be multi-line
local action_msg="Inserting content after line $line_num in \`$file\`"
if [[ -z "$file" || -z "$line_num" || ! "$line_num" =~ ^[0-9]+$ || "$line_num" -lt 0 ]]; then
_md_li "$action_msg (Invalid args)" 1
_log "ERROR" "Usage: ins_after \"/path/to/file\" line_number \"\$content\" (line_number >= 0)"
_log_context_on_error "$file" "$line_num" "ins_after error" # Log context even for invalid args if file exists
exit 1
fi
if [[ ! -f "$file" ]]; then
_md_li "$action_msg (File not found)" 1
_log "ERROR" "File not found: $file"
exit 1
fi
_backup_file_internal "$file" || exit 1
local temp_file="${TEMP_DIR}/ins_after_tmp_$(basename "$file")_$$"
local awk_status=0
local mv_status=0
if [[ "$line_num" -eq 0 ]]; then
awk -v text="$content_block" 'BEGIN { print text } { print }' "$file" >"$temp_file"
awk_status=$?
else
# Check line number bounds before awk
local total_lines=$(wc -l <"$file" | awk '{print $1}')
if [[ "$line_num" -gt "$total_lines" ]]; then
_md_li "$action_msg (Line number out of bounds)" 1
_log "ERROR" "Line number $line_num is greater than total lines ($total_lines) in '$file'."
_log_context_on_error "$file" "$line_num" "ins_after error"
exit 1
fi
awk -v line="$line_num" -v text="$content_block" '{ print } NR == line { print text }' "$file" >"$temp_file"
awk_status=$?
fi
# (File existence check logic remains same as v4.3)
if [[ $awk_status -eq 0 ]]; then
if [[ ! -f "$temp_file" ]]; then
if [[ ! -s "$file" ]]; then
if [[ "$line_num" -eq 0 ]]; then
_log "DEBUG" "ins_after: Original empty, inserting at line 0. Recreating temp file with content."
printf "%s" "$content_block" >"$temp_file" || awk_status=$?
else
_log "DEBUG" "ins_after: Original empty, cannot insert after line $line_num > 0. Temp file correctly empty."
touch "$temp_file"
fi
else
_log "ERROR" "awk command succeeded but temp file '$temp_file' not created for ins_after in non-empty file $file"
awk_status=1
fi
fi
fi
if [[ $awk_status -eq 0 ]]; then
mv "$temp_file" "$file"
mv_status=$?
if [[ $mv_status -ne 0 ]]; then
_log "ERROR" "Failed to move temp file '$temp_file' to '$file'"
rm "$temp_file" 2>/dev/null
fi
else
_log "ERROR" "awk command failed (exit code: $awk_status) for ins_after in $file"
_log_context_on_error "$file" "$line_num" "ins_after awk error" # Log context on awk fail
rm "$temp_file" 2>/dev/null
mv_status=1
fi
local overall_status=$((awk_status || mv_status))
_md_li "$action_msg" $overall_status
_log "INFO" "ins_after status: $overall_status for $file"
if [[ $overall_status -ne 0 ]]; then
# Context logging added here if mv failed
_log_context_on_error "$file" "$line_num" "ins_after mv error"
exit 1
fi
return $overall_status
}
# --- Replace Between Contexts Function ---
# (Updated END block for contextual logging on failure)
repl_between() {
local file="$1"
local start_context="$2"
local end_context="$3"
local new_content="$4"
local action_msg="Replacing content between markers in \`$file\`"
local usage="Usage: repl_between \"file\" \"\$start_context\" \"\$end_context\" \"\$new_content\""
# --- Argument Validation ---
if [[ -z "$file" || -z "$start_context" || -z "$end_context" ]]; then
_md_li "$action_msg (Invalid args - missing file or context)" 1
_log "ERROR" "$usage"
exit 1
fi
if [[ ! -f "$file" ]]; then
_md_li "$action_msg (File not found)" 1
_log "ERROR" "File not found: $file"
exit 1
fi
_backup_file_internal "$file" || exit 1
# --- Normalize Contexts ---
local norm_start=$(_normalize_text "$start_context")
local norm_end=$(_normalize_text "$end_context")
if [[ -z "$norm_start" || -z "$norm_end" ]]; then
_md_li "$action_msg (Invalid context - empty after normalization)" 1
_log "ERROR" "Start or end context is effectively empty after removing whitespace. Cannot proceed."
# Add context logging before exiting
_log "DEBUG" "[repl_between] Context log (file start) due to invalid markers:"
head -n 5 "$file" | while IFS= read -r line; do _log "DEBUG" " $line"; done
exit 1
fi
_log "DEBUG" "Normalized Start Context: '$norm_start'"
_log "DEBUG" "Normalized End Context: '$norm_end'"
# --- Perform Replacement using awk ---
local temp_file="${TEMP_DIR}/repl_between_tmp_$(basename "$file")_$$"
local awk_status=0
local mv_status=0
local awk_error_output=""
# awk script refactored for clarity and robustness
# Capture stderr to check for explicit error messages from awk script
awk_error_output=$(awk -v n_start="$norm_start" \
-v n_end="$norm_end" \
-v new_content="$new_content" \
'
function normalize(s) {
gsub(/[[:space:]]+/, "", s);
return s;
}
BEGIN {
state = "LOOKING_FOR_START"; # LOOKING_FOR_START, PRINTING_REPLACEMENT, LOOKING_FOR_END, AFTER_END
start_found = 0;
end_found = 0;
buffer = "";
match_pos = 0;
start_line_num = -1;
end_line_num = -1;
buffer_start_line = 1; # Track the original line number corresponding to the start of the buffer
}
{
current_line = $0;
orig_lines[NR] = current_line;
# Manage buffer - keep it reasonably sized to avoid excessive memory use
# If buffer gets too large without finding start, trim from beginning
MAX_BUFFER_LINES = 1000; # Adjust as needed
if (NR > buffer_start_line + MAX_BUFFER_LINES && state == "LOOKING_FOR_START") {
buffer_start_line++;
# Complex buffer trimming logic needed here - for now, just warn if buffer gets large
if (NR % 500 == 0) { print "WARN: [repl_between] Buffer growing large, consider more unique context." > "/dev/stderr"; }
}
norm_line = normalize(current_line);
buffer = buffer norm_line;
if (state == "LOOKING_FOR_START") {
match_pos = index(buffer, n_start);
if (match_pos > 0) {
start_found = 1;
start_line_num = NR; # Line where the *end* of the start context was found
# Print all original lines stored up to and including this line
for (i = 1; i <= NR; ++i) {
if (i in orig_lines) print orig_lines[i]; # membership test keeps blank lines (comparing to "" would drop them)
}
print new_content;
state = "LOOKING_FOR_END";
buffer = ""; # Reset buffer for finding end context
buffer_start_line = NR + 1; # Next line starts new buffer
}
}
else if (state == "LOOKING_FOR_END") {
match_pos = index(buffer, n_end);
if (match_pos > 0) {
end_found = 1;
end_line_num = NR; # Line where the *end* of the end context was found
# Print the line containing the end marker
print current_line;
state = "AFTER_END";
buffer = ""; # Reset buffer
buffer_start_line = NR + 1;
}
# If end not found, line is implicitly skipped
}
else if (state == "AFTER_END") {
print current_line;
}
}
END {
# Contextual logging moved here
context_lines = 3; # How many lines before/after to show
if (!start_found) {
print "ERROR: [repl_between] Start context marker not found in file." > "/dev/stderr";
start_context_line = NR > context_lines ? NR - context_lines : 1;
print "DEBUG: Context near end of file:" > "/dev/stderr";
for (l=start_context_line; l<=NR; ++l) { if (l in orig_lines) print "DEBUG: L" l ": " orig_lines[l] > "/dev/stderr"; }
exit 1;
}
if (!end_found) {
print "ERROR: [repl_between] End context marker not found after start marker (EOF reached)." > "/dev/stderr";
start_context_line = start_line_num > context_lines ? start_line_num - context_lines : 1;
# Show context from where start was found to end of file (or N lines after start)
end_context_line = NR < start_line_num + context_lines ? NR : start_line_num + context_lines;
print "DEBUG: Context near where start marker was found (line " start_line_num "):" > "/dev/stderr";
for (l=start_context_line; l<=end_context_line; ++l) { if (l in orig_lines) print "DEBUG: L" l ": " orig_lines[l] > "/dev/stderr"; }
exit 1;
}
}
' "$file" 2>&1 >"$temp_file") # Redirect stderr to the command substitution FIRST, then stdout to the temp file; the reverse order would dump awk's ERROR/WARN messages into the temp file instead of capturing them
awk_status=$?
# Check Status and Replace Original (same logic as before)
if [[ $awk_status -ne 0 || "$awk_error_output" == *"ERROR:"* ]]; then
_log "ERROR" "awk command failed or reported error for repl_between in $file (exit code: $awk_status)."
if [[ -n "$awk_error_output" ]]; then
# Print captured stderr from awk for debugging
echo "$awk_error_output" | while IFS= read -r line; do _log "DEBUG" " $line"; done
fi
[[ $awk_status -eq 0 ]] && awk_status=1
rm "$temp_file" 2>/dev/null
else
if [[ ! -f "$temp_file" ]]; then
_log "ERROR" "awk command succeeded but temp file '$temp_file' not created for repl_between in $file."
awk_status=1
fi
fi
# Move Temp to Original if successful (same logic as before)
if [[ $awk_status -eq 0 ]]; then
mv "$temp_file" "$file"
mv_status=$?
if [[ $mv_status -ne 0 ]]; then
_log "ERROR" "Failed to move temp file '$temp_file' to '$file'"
# Context logging added here if mv failed (using last known good state?) - Difficult
_log "DEBUG" "[repl_between] mv failed. Context logging might reflect post-awk state if temp file exists."
_log_context_on_error "$file" 0 "repl_between mv error" # Show start of file
rm "$temp_file" 2>/dev/null
fi
else
mv_status=1 # Ensure overall status reflects awk failure
fi
# Report and Exit (same logic as before)
local overall_status=$((awk_status || mv_status))
_md_li "$action_msg" $overall_status
_log "INFO" "repl_between status: $overall_status for $file"
[[ $overall_status -ne 0 ]] && exit 1
return $overall_status
}
# --- Overwrite File Function ---
# (Add contextual logging on write failure)
write_file() {
local file="$1"
local content="$2"
local action_msg="Overwriting file \`$file\`"
if [[ -z "$file" ]]; then
_md_li "$action_msg (Invalid args)" 1
_log "ERROR" "Usage: write_file \"/path/to/file\" \"\$content_variable\""
exit 1
fi
_backup_file_internal "$file" || exit 1
local parent_dir=$(dirname "$file")
if [[ "$parent_dir" != "." && ! -d "$parent_dir" ]]; then
_log "INFO" "Parent directory '$parent_dir' does not exist for write_file target '$file'. Attempting creation..."
mkdir -p "$parent_dir"
if [[ $? -ne 0 ]]; then
_md_li "$action_msg (Failed to create parent dir)" 1
_log "ERROR" "Failed to create parent directory '$parent_dir' for '$file'."
exit 1
fi
fi
# Use temp file for atomic write/overwrite
local temp_file="${TEMP_DIR}/write_file_tmp_$(basename "$file")_$$"
printf "%s" "$content" >"$temp_file"
local write_status=$?
local mv_status=0
if [[ $write_status -eq 0 ]]; then
mv "$temp_file" "$file"
mv_status=$?
if [[ $mv_status -ne 0 ]]; then
_log "ERROR" "Failed to move temp file '$temp_file' over '$file'."
rm "$temp_file" 2>/dev/null
fi
else
_log "ERROR" "Failed to write content to temp file '$temp_file' for '$file'."
rm "$temp_file" 2>/dev/null
mv_status=1 # Reflect write failure
fi
local overall_status=$((write_status || mv_status))
if [[ $overall_status -eq 0 && ! -v BACKUP_MAP["$file"] ]]; then
_track_created_file_internal "$file"
fi
_md_li "$action_msg" $overall_status
_log "INFO" "write_file status: $overall_status for $file"
if [[ $overall_status -ne 0 ]]; then
# Log context from original file (backup) if available
if [[ -v BACKUP_MAP["$file"] ]]; then
_log "DEBUG" "[write_file] Error occurred. Context from original (backup) file:"
head -n 5 "${BACKUP_MAP[$file]}" | while IFS= read -r line; do _log "DEBUG" " $line"; done
else
_log "DEBUG" "[write_file] Error occurred, no backup found for context."
fi
exit 1
fi
return $overall_status
}
# --- Create File Function ---
# (Add contextual logging on write/mkdir failure)
new_file() {
local file="$1"
local content="$2"
local action_msg="Creating file \`$file\`"
if [[ -z "$file" ]]; then
_md_li "$action_msg (Invalid args)" 1
_log "ERROR" "Usage: new_file \"/path/to/new_file\" \"\$content_variable\""
exit 1
fi
if [[ -e "$file" ]]; then
_md_li "$action_msg (Path already exists)" 1
_log "ERROR" "Path already exists: $file."
exit 1
fi
local parent_dir=$(dirname "$file")
if [[ "$parent_dir" != "." && ! -d "$parent_dir" ]]; then
_log "INFO" "Parent directory '$parent_dir' does not exist. Creating..."
mkdir -p "$parent_dir"
local mkdir_status=$?
if [[ $mkdir_status -ne 0 ]]; then
_md_li "$action_msg (Failed to create parent dir)" 1
_log "ERROR" "Failed to create parent directory '$parent_dir' for '$file'."
exit 1
fi
fi
printf "%s" "$content" >"$file"
local write_status=$?
if [[ $write_status -eq 0 ]]; then
_track_created_file_internal "$file" # Track only on success
else
_log "ERROR" "Failed to write content to new file '$file'."
[[ -f "$file" ]] && rm "$file" 2>/dev/null # Attempt cleanup
fi
_md_li "$action_msg" $write_status
_log "INFO" "new_file status: $write_status for $file"
[[ $write_status -ne 0 ]] && exit 1
return $write_status
}
# --- Delete File Function ---
# (Add contextual logging on rm failure)
del_file() {
local file="$1"
local action_msg="Deleting file \`$file\`"
if [[ -z "$file" ]]; then
_md_li "$action_msg (Invalid args)" 1
_log "ERROR" "Usage: del_file \"/path/to/existing_file\""
exit 1
fi
if [[ ! -f "$file" ]]; then
_md_li "$action_msg (File not found)" 1
_log "ERROR" "File not found or is not a regular file: $file."
exit 1
fi
_backup_file_internal "$file" || exit 1
rm "$file"
local status=$?
_md_li "$action_msg" $status
_log "INFO" "del_file status: $status for $file"
if [[ $status -ne 0 ]]; then
_log "ERROR" "Failed to delete file '$file'."
# Context is less useful here, but log error
exit 1
fi
return $status
}
# --- Move File Function ---
# (Add contextual logging on failure points)
mv_file() {
local source_path="$1"
local dest_path="$2"
local action_msg="Moving file \`$source_path\` to \`$dest_path\`"
if [[ -z "$source_path" || -z "$dest_path" || "$source_path" == "$dest_path" ]]; then
_md_li "$action_msg (Invalid args)" 1
_log "ERROR" "Usage: mv_file \"/source\" \"/destination\" (paths cannot be empty or identical)"
exit 1
fi
if [[ ! -f "$source_path" ]]; then
_md_li "$action_msg (Source not found)" 1
_log "ERROR" "Source file not found: $source_path."
exit 1
fi
if [[ -e "$dest_path" ]]; then
_md_li "$action_msg (Destination exists)" 1
_log "ERROR" "Destination path already exists: $dest_path."
exit 1
fi
local dest_parent_dir=$(dirname "$dest_path")
if [[ "$dest_parent_dir" != "." && ! -d "$dest_parent_dir" ]]; then
_log "INFO" "Destination parent directory '$dest_parent_dir' does not exist. Creating..."
mkdir -p "$dest_parent_dir"
local mkdir_status=$?
if [[ $mkdir_status -ne 0 ]]; then
_md_li "$action_msg (Failed to create dest parent dir)" 1
_log "ERROR" "Failed to create destination parent directory '$dest_parent_dir'."
exit 1
fi
fi
_backup_file_internal "$source_path" || exit 1
local cp_status=0
local track_status=0
local rm_status=0
cp "$source_path" "$dest_path"
cp_status=$?
if [[ $cp_status -eq 0 ]]; then
_track_created_file_internal "$dest_path"
track_status=$?
if [[ $track_status -eq 0 ]]; then
rm "$source_path"
rm_status=$?
if [[ $rm_status -ne 0 ]]; then
_log "ERROR" "Move failed: Could not remove source '$source_path' after copying."
rm "$dest_path" 2>/dev/null
local temp_tracker=()
for item in "${CREATED_FILES_TRACKER[@]}"; do [[ "$item" != "$dest_path" ]] && temp_tracker+=("$item"); done
CREATED_FILES_TRACKER=("${temp_tracker[@]}")
fi
else
_log "ERROR" "Move failed: Could not track destination '$dest_path'."
rm "$dest_path" 2>/dev/null
fi
else
_log "ERROR" "Move failed: Could not copy '$source_path' to '$dest_path'."
# Context: show source file context?
_log_context_on_error "$source_path" 0 "mv_file copy error"
fi
local overall_status=$((cp_status || track_status || rm_status))
_md_li "$action_msg" $overall_status
_log "INFO" "mv_file status: $overall_status for $source_path -> $dest_path (cp: $cp_status, track: $track_status, rm: $rm_status)"
if [[ $overall_status -ne 0 ]]; then
# Context logging added here if any step failed
if [[ $cp_status -ne 0 ]]; then
_log_context_on_error "$source_path" 0 "mv_file copy error"
elif [[ $track_status -ne 0 ]]; then
_log "DEBUG" "[mv_file] Context logging skipped for tracking error." # Less useful context
elif [[ $rm_status -ne 0 ]]; then
_log "DEBUG" "[mv_file] Source remove failed. Context from backup:"
if [[ -v BACKUP_MAP["$source_path"] ]]; then
head -n 5 "${BACKUP_MAP[$source_path]}" | while IFS= read -r line; do _log "DEBUG" " $line"; done
fi
fi
exit 1
fi
return $overall_status
}
# --- Run Prettier Function ---
# (Added extraction of error message/line for Markdown report)
#-------------------------------------------------------------------------------
run_prettier() {
_md_hr
_md_h3 "Formatting Phase (Prettier)"
if [[ $# -eq 0 ]]; then
_md_li_simple "No files specified for formatting."
return 0
fi
if ! command -v npx &>/dev/null; then
_md_warn_report "Prettier: 'npx' command not found. Skipping formatting."
return 0
fi
_md_li_simple "Running Prettier individually on ${#} file(s)..."
local failures_occurred=0
local file_count=0
local total_files=$#
local failed_files_summary=() # Store details of failed files
for file in "$@"; do
((file_count++))
local file_action_msg="[${file_count}/${total_files}] Formatting \`$file\`"
if [[ ! -f "$file" ]]; then
_md_li_simple "$file_action_msg ... **SKIPPED** (File not found)"
continue
fi
local is_tracked=false
if [[ "$IS_GIT_REPO" == true ]]; then
if git ls-files --error-unmatch "$file" >/dev/null 2>&1; then
is_tracked=true
_log "DEBUG" "File '$file' is tracked by Git."
else
_log "DEBUG" "File '$file' is not tracked by Git."
is_tracked=false
fi
fi
local prettier_output
local prettier_status=0
if prettier_output=$(npx prettier --write --ignore-unknown --log-level warn "$file" 2>&1); then
prettier_status=0
_md_li "$file_action_msg" $prettier_status
if [[ "$is_tracked" == true ]]; then
if ! git diff --quiet "$file"; then
_log "DEBUG" "File '$file' has unstaged changes after successful formatting."
fi
fi
else
prettier_status=$?
((failures_occurred++))
_md_li "$file_action_msg" $prettier_status
_log "WARN" "Prettier failed for $file (status: $prettier_status). Output:"
echo "$prettier_output" | sed 's/^/ /' >&2
# Extract error details for summary
local error_line=$(echo "$prettier_output" | grep -oP 'SyntaxError:.*?\(\K[0-9]+(?=:[0-9]+\))' | head -n 1)
[[ -z "$error_line" ]] && error_line="?" # Fall back when no "(line:col)" location is present
local error_msg=$(echo "$prettier_output" | grep -oP 'SyntaxError: .*' | head -n 1 || echo "Unknown Prettier error")
failed_files_summary+=(" - \`$file\` (Line ~$error_line): $error_msg")
if [[ "$is_tracked" == true ]]; then
_log "INFO" "Attempting Git revert of WORKING DIRECTORY changes for tracked file '$file' (to index state)..."
if git restore "$file" 2>/dev/null; then
_log "INFO" "Successfully reverted working directory changes for '$file' using 'git restore'."
elif git checkout -- "$file" 2>/dev/null; then
_log "INFO" "Successfully reverted working directory changes for '$file' using 'git checkout --'."
else
local git_revert_fail_code=$?
_log "ERROR" "FAILED to revert working directory changes for '$file' using Git (exit code: $git_revert_fail_code). Manual check needed."
fi
else
_log "WARN" "Cannot revert '$file' using Git; it is not tracked by Git."
local was_created_by_script=false
for created in "${CREATED_FILES_TRACKER[@]}"; do
if [[ "$created" == "$file" ]]; then
was_created_by_script=true
break
fi
done
if [[ "$was_created_by_script" == true ]]; then
_log "INFO" "'$file' was created by this script run. It will be removed by the main cleanup routine if the script exits due to this failure."
fi
fi
continue
fi
done # End loop
# --- Prettier Summary ---
if [[ $failures_occurred -gt 0 ]]; then
_md_warn_report "Prettier processing finished with failure(s). Git revert attempted for failed tracked files (check logs)."
# Print summary of failures
if [[ ${#failed_files_summary[@]} -gt 0 ]]; then
_md_note "Prettier Errors Summary:"
printf "%s\n" "${failed_files_summary[@]}"
fi
else
_md_li_simple "Prettier processing finished successfully."
fi
diff_to_head
return $failures_occurred
} # end run_prettier
# Expose logger and log functions (still point to _log for stderr)
function logger() { _log "$@"; }
function log() { _log "$@"; }
# --- Diff to HEAD Function ---
# (No changes from previous version - diff_to_head)
diff_to_head() {
_md_hr
_md_h3 "Working Directory Status vs. INDEX/HEAD (Concise, New Files Listed)"
# Use global flags
if [[ "$GIT_AVAILABLE" == false ]]; then
_md_warn_report "Cannot report status vs Git ('git' command not found)."
return 0 # Non-fatal, just skip
fi
if [[ "$IS_GIT_REPO" == false ]]; then
_md_warn_report "Cannot report status vs Git (not in a Git repository)."
return 0 # Non-fatal, just skip
fi
local tracked_diff_output tracked_diff_status
local untracked_files staged_new_files all_new_files
local has_tracked_changes=false
local has_new_files=false
local overall_errors=0
# --- 1. Get Diff for Files Changed Since INDEX (Unstaged changes) ---
_md_note "**Unstaged Changes (vs Index/Staged state):**"
# Get the raw diff compared to the index (staged area)
# This shows changes made by the script that weren't reverted
tracked_diff_output=$(git diff --patch --no-color --no-prefix 2>&1)
tracked_diff_status=$?
if [[ $tracked_diff_status -ne 0 && "$tracked_diff_output" == *"fatal:"* ]]; then
# Handle fatal error from git diff itself
_md_codeblock_start text
printf "Error running git diff for unstaged changes:\n%s\n" "$tracked_diff_output"
_md_codeblock_end
SCRIPT_ERRORS_OCCURRED=1
overall_errors=1
elif [[ -n "$tracked_diff_output" ]]; then
has_tracked_changes=true
_md_codeblock_start "diff"
printf "%s\n" "$tracked_diff_output" # Ensure final newline
_md_codeblock_end
fi
# Report if no *unstaged* changes were found
if [[ "$has_tracked_changes" == false && "$overall_errors" -eq 0 ]]; then
_md_li_simple "**No unstaged changes found.** This means your script execution **DID NOT** successfully modify any file. If the file you are editing is below 200 LOC, overwrite it in full to simplify your next attempt."
fi
# --- 2. Identify ALL New Files (Untracked + Staged New) ---
# This gives context about files that might have been added before or during the script
_md_note "**New Files Added (Staged or Untracked):**"
# Get untracked files
# mapfile masks the process substitution's exit status, so re-check the command directly
mapfile -t untracked_files < <(git ls-files --others --exclude-standard 2>/dev/null)
if ! git ls-files --others --exclude-standard >/dev/null 2>&1; then
_md_warn_report "Error listing untracked files: $(git ls-files --others --exclude-standard 2>&1 1>/dev/null)"
SCRIPT_ERRORS_OCCURRED=1
overall_errors=1
fi
# Get newly added staged files
mapfile -t staged_new_files < <(git diff --cached --name-only --diff-filter=A 2>/dev/null)
if ! git diff --cached --name-only --diff-filter=A >/dev/null 2>&1; then # re-check: mapfile hides the pipeline status
_md_warn_report "Error listing staged new files: $(git diff --cached --name-only --diff-filter=A 2>&1 1>/dev/null)"
SCRIPT_ERRORS_OCCURRED=1
overall_errors=1
fi
# Combine lists
all_new_files=("${untracked_files[@]}" "${staged_new_files[@]}")
if [[ ${#all_new_files[@]} -gt 0 ]]; then
has_new_files=true
# Sort and unique for cleaner output
local sorted_unique_new_files
mapfile -t sorted_unique_new_files < <(printf "%s\n" "${all_new_files[@]}" | sort -u)
_md_codeblock_start "text"
# Ensure final newline from printf
printf "%s\n" "${sorted_unique_new_files[@]}"
_md_codeblock_end
else
_md_li_simple "*No new files found.*"
fi
# --- Overall Summary ---
_md_hr
if [[ "$overall_errors" -ne 0 ]]; then
_md_li_simple "Overall: Errors occurred during status check."
return 1
elif [[ "$has_tracked_changes" == false && "$has_new_files" == false && -z "$(git diff --cached --name-only)" ]]; then
# No unstaged, no new, and no staged changes either
_md_li_simple "Overall: Working directory matches HEAD and is clean."
return 0
elif [[ "$has_tracked_changes" == false && "$has_new_files" == false ]]; then
# No unstaged, no new, but potentially staged changes exist from before script run
_md_li_simple "Overall: No unstaged changes remain; working directory matches index (staged state)."
return 0
else
_md_li_simple "Overall: Working directory has unstaged changes or new files."
return 0
fi
}
# Command Execution Wrapper
# Executes a command, checks status, logs command and exits on failure.
# Useful for chaining read/find operations before mod_start.
#-------------------------------------------------------------------------------
try_run() {
# Command and its arguments are passed directly
local cmd_output
local cmd_status
_log "TRY" "$@"
# Execute the command, capturing stdout and status
cmd_output=$("$@")
cmd_status=$?
if [[ $cmd_status -ne 0 ]]; then
# Log only the failed command and its exit code on failure
log "ERROR" "Command failed (Exit Code: $cmd_status): $*"
# Context logging should ideally happen in the failed underlying command
exit 1 # Exit the main script
fi
# If successful, print the captured output to stdout for assignment
printf "%s" "$cmd_output"
return 0
}
# --- END OF SCRIPT v5.0 ---

Your Role: You are an intelligent Bash script generator. Your task is to analyze the provided code context (dump.*.txt for the HEAD state, plus the latest script run outputs showing uncommitted changes or errors) and a Goal Description, and produce a refactor.sh script that performs the requested code modifications.

Key Requirement: The generated refactor.sh script MUST leverage the companion utility script utils.sh (v5.0+, assumed to be in the same directory) for setup, verification, accessing file content states, file backups/tracking, modification execution via helper functions, error rollback with contextual logging, and targeted Prettier execution. Your generated script should be concise and focus on efficiently calling the appropriate utils.sh functions.

Understanding File States & Choosing Your Base:

  • The Challenge: The file on disk might differ from reference states due to uncommitted changes.
  • Your Tools: utils.sh provides functions to get specific versions:
    • read_curr <file>: Gets the content as it currently exists on the disk (working directory). Use this for reading content you intend to modify.
    • read_head <file>: Gets the content from the HEAD Git commit. Use this only if you need to revert to or reference the last known-good committed state.
  • Default: Base all modifications on the current disk content using read_curr unless explicitly told otherwise.
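As a minimal illustration of the two states (using plain `cat` and `git show HEAD:<file>` as stand-ins for `read_curr`/`read_head` — these are not the utils.sh implementations — in a throwaway repo):

```shell
#!/usr/bin/env bash
# Sketch only: cat / git show stand in for read_curr / read_head.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
printf 'committed line\n' > app.js
git add app.js
git -c user.email=demo@example.com -c user.name=demo commit -qm 'init'
printf 'uncommitted edit\n' > app.js            # simulate an unstaged change

echo "read_curr: $(cat app.js)"                 # working-directory (disk) content
echo "read_head: $(git show HEAD:app.js)"       # last committed content
```

After an uncommitted edit, the two calls return different content — which is exactly why modifications should be based on the `read_curr` view.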

Input Requirements:

  1. This System Prompt.
  2. dump.*.txt Content: git HEAD state
  3. (Optional) Previous Script Output: git diff output (showing unstaged changes) or error logs (including contextual logs from utils.sh on failure). Use this to understand the current state and why previous attempts failed.
  4. Goal Description: The specific refactoring task.

Output Requirements:

  • Start each reply with a brief summary (bullet points) of the script's goal and the primary modification strategy (e.g., "Content construction using find/read/write", "Simple block replace using repl_between", "Line deletion using find_line/del_line").
  • In a bash codeblock: Generate a single, complete, executable refactor.sh script.
  • Output only the script.

Best Practices (Workflow - Read Carefully):

  1. ALWAYS Use utils.sh Functions: For setup (setup_op), modifications (mod_start/mod_end), content access (read_*, find_*), and file operations (write_file, new_file, etc.).

  2. Error Checks:

    • Outside mod_start/mod_end: Keep explicit error checks (if [[ $? -ne 0 ]]... exit 1) after read-only operations (read_*, find_*) used during content preparation. Log errors clearly before exiting.
    • Inside mod_start/mod_end: You can omit explicit exit status checks immediately after utils.sh modification helpers (write_file, del_line, ins_after, etc.). Rely on set -e (enabled by mod_start) to automatically halt on the first error and the utils.sh trap/cleanup mechanism for rollback and detailed error logging (including context) to stderr. Exception: If a find_* operation within the mod_start/mod_end block might legitimately not find a match, use an if condition to handle the "not found" case gracefully without exiting the whole script, but still exit on actual command failures.
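The "not found is fine, real failure is fatal" pattern under `set -e` can be sketched self-contained, with plain `grep` standing in for a `find_*` helper (the names here are illustrative):

```shell
#!/usr/bin/env bash
set -e                                    # as enabled by mod_start
printf 'alpha\nbeta\n' > demo_input.txt

# A nonzero status inside an 'if' condition does NOT trigger set -e,
# so a legitimately missing match can be handled gracefully.
if match=$(grep -n '^gamma$' demo_input.txt); then
  result="found at $match"
else
  result="not found, skipping step"
fi
echo "$result"
echo "script still running"
```

A bare `grep` call outside the `if` would have aborted the script on the failed match; wrapping the lookup in a conditional is what keeps the "no match" case non-fatal.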
  3. PRIMARY METHOD: Dynamic Content Construction (find_* -> read_* -> write_file)

    • This is the most robust and preferred method, especially for multi-line changes or when context might be unstable.
    • Steps:
      1. (Locate): Before mod_start, use read-only finders (find_line, find_line_num_after, find_block_lines). Check $? and exit 1 on failure.
      2. (Extract): Before mod_start, use read-only extractors (read_before_line, read_between_lines, read_after_line, etc.) with the found line numbers/patterns. Check $? and exit 1 on failure.
      3. (Modify/Define): Define new code snippets or modify extracted content using Bash tools (sed, etc.). Use quoted heredocs (variable=$(cat <<'EOF'...)) MANDATORILY for multi-line content.
      4. (Combine): Concatenate parts into a final variable using printf.
      5. (Apply): Inside mod_start/mod_end, use write_file "file" "$final_content". No explicit error check needed after write_file here.
    • Benefits: Handles shifting line numbers, works reliably even with large changes, less prone to context ambiguity.
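For illustration only (the real utils.sh helpers add validation and contextual error logging), the five steps map onto standard tools roughly like this, with grep standing in for find_line and sed for the extractors; the file and markers are made up:

```shell
#!/bin/bash
set -e
f=/tmp/demo_construct.js
printf 'const a = 1\n// START\nconst old = true\n// END\nconst z = 9\n' > "$f"
start=$(grep -n '// START' "$f" | head -1 | cut -d: -f1)     # 1. locate
end=$(grep -n '// END' "$f" | head -1 | cut -d: -f1)
before=$(sed -n "1,${start}p" "$f")                          # 2. extract (marker line kept)
after=$(sed -n "${end},\$p" "$f")
new_block=$(cat <<'EOF'
const replacement = 42
EOF
)                                                            # 3. define
final=$(printf '%s\n%s\n%s' "$before" "$new_block" "$after") # 4. combine
printf '%s\n' "$final" > "$f"                                # 5. apply
```

Because the extraction happens once against known line numbers and the result is written atomically, the replacement is immune to line-number drift between steps.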
  4. Alternative: Dynamic Line Edits (find_* -> del_line/ins_after)

    • Suitable For: Deleting or inserting relative to a specific, known line.
    • Process:
      • Inside mod_start/mod_end:
      • Use find_line or find_line_num_after to get the target line number.
      • Use an if condition to check if the line was found ($? -eq 0).
      • If found, call del_line or ins_after. No explicit error check needed after del_line/ins_after here.
      • If not found, handle gracefully (e.g., log a warning, skip the step) within the if block's else.
    • WARNING: Never use hardcoded or assumed line numbers; every edit shifts the numbers of the lines below it. When making multiple modifications, re-run the find_* call immediately before each modification, and when deleting several lines by number, delete from the highest line number to the lowest.
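The reverse-order rule can be shown in plain bash (utils.sh not involved; GNU sed -i assumed): deleting from the highest line number down keeps the remaining target numbers valid:

```shell
#!/bin/bash
set -e
f=/tmp/demo_lineedit.txt
printf 'keep\ndrop me\nkeep\ndrop me\n' > "$f"
mapfile -t hits < <(grep -n 'drop me' "$f" | cut -d: -f1)
# Deleting top-down would shift line 4 to line 3 after the first delete
for ((i=${#hits[@]}-1; i>=0; i--)); do
  sed -i "${hits[i]}d" "$f"   # GNU sed; BSD sed would need -i ''
done
```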
  5. Alternative: repl_between (Use Sparingly)

    • Suitable For: Only simple, single-block replacements where the lines immediately before and after the block are highly unique, stable, and short (~1-3 lines, <10 words).
    • Process: Define start_context, end_context, new_content using quoted heredocs. Inside mod_start/mod_end, call repl_between "file" "$start_context" "$end_context" "$new_content". No explicit error check needed after repl_between here.
    • WARNING: This method has proved brittle in complex scenarios; prefer Dynamic Content Construction if unsure. Context matching normalizes whitespace/newlines, so whitespace differences in the contexts are ignored.
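The underlying idea, minus the whitespace normalization that repl_between performs, can be sketched with standard tools; the CTX markers and file path here are placeholders:

```shell
#!/bin/bash
set -e
f=/tmp/demo_between.txt
printf 'head\nCTX_A\nold body\nCTX_B\ntail\n' > "$f"
new_content=$(cat <<'EOF'
new body line 1
new body line 2
EOF
)
a=$(grep -nF 'CTX_A' "$f" | head -1 | cut -d: -f1)
b=$(grep -nF 'CTX_B' "$f" | head -1 | cut -d: -f1)
# Keep both context lines; replace only what lies between them
{ sed -n "1,${a}p" "$f"; printf '%s\n' "$new_content"; sed -n "${b},\$p" "$f"; } > "$f.tmp"
mv "$f.tmp" "$f"
```

Note how easily this breaks if either context matches more than once or not at all, which is why short ambiguous contexts make the method brittle.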
  6. Quoted Heredocs: MANDATORY for defining any multi-line content (variable=$(cat <<'EOF' ... EOF)).
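The quoting matters because an unquoted delimiter lets the shell expand $variables and command substitutions inside the body, silently corrupting code snippets that contain them:

```shell
#!/bin/bash
quoted=$(cat <<'EOF'
const home = process.env.$HOME
EOF
)
unquoted=$(cat <<EOF
const home = process.env.$HOME
EOF
)
echo "$quoted"
```

With <<'EOF' the $HOME survives verbatim; without the quotes the shell substitutes its own value of HOME into the JavaScript snippet.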

  7. Prettier Compliance: Ensure generated code snippets adhere to the project's Prettier config (assume { "semi": false, "singleQuote": true, "trailingComma": "es5" } if unspecified).

Fallback Scenario (write_file with Static Content - STRONGLY DISCOURAGED):

Avoid defining the entire file content statically in a heredoc and using write_file. Always prefer the Dynamic Content Construction method described above. Only consider the static approach under extreme circumstances (e.g., tiny file, total rewrite from scratch) and justify it clearly in the summary.

utils.sh Purpose & Features:

Provides a robust framework for safe refactoring:

  • Setup/Teardown/Traps (setup_op, mod_start, mod_end).
  • Git workspace checks.
  • Atomic modification phase (set -e) with backup-based rollback.
  • Contextual error logging (lines around failures) to stderr.
  • Content Access: read_curr, read_head.
  • Read-Only Finders: find_line, find_line_num_after, find_block_lines.
  • Read-Only Extractors: read_before_line, read_before_regex, read_after_line, read_after_regex, read_between_lines, read_between_regex, read_block (alias for read_between_lines).
  • Modification Helpers: del_line, ins_after, repl_line, sub_in_line, repl_between, write_file, new_file, del_file, mv_file. (Auto-backup, auto parent-dir creation for new_file/mv_file).
  • Validation: Targeted Prettier execution with Git restore attempt on failure for tracked files.

Usage Workflow example (Preferred Method: Dynamic Content Construction):

Note: NEVER add bash comments to your final script, to keep it as short as possible. The comments below are for illustration only.

#!/bin/bash
source ./utils.sh || exit 1

DIRS_TO_CHECK=("src" "another/dir") # Mandatory, all paths touched by the script
FILES_TO_MODIFY=("src/billing.js") # Optional, if existing files are modified
# FILES_TO_CREATE=("path/to/your/newfile.js") # Optional, if creating new files
# FILES_TO_MOVE=("path/to/your/oldfile.js" "path/to/your/newfile.js") # Optional, if moving files
# FILES_TO_DELETE=("path/to/your/oldfile.js") # Optional, if deleting files

setup_op

start_line=$(try_run find_line "src/billing.js" "unique start pattern")
end_line=$(try_run find_line_num_after "src/billing.js" "$start_line" "unique end pattern")
# If any find_* helper fails, try_run logs the error and exits the script.

content_before=$(try_run read_before_line "src/billing.js" "$start_line")
content_middle=$(try_run read_between_lines "src/billing.js" "$start_line" "$end_line")
content_after=$(try_run read_after_line "src/billing.js" "$end_line")

new_middle_block=$(cat <<'EOF'
// Your new replacement code block here
// Adhering to Prettier rules
EOF
)

final_content=$(printf "%s\n%s\n%s" "$content_before" "$new_middle_block" "$content_after")

mod_start # starting modification phase

write_file "src/billing.js" "$final_content"

mod_end

Helper Function Reference:

  • Read-Only Access:
    • read_curr(file): Content from working directory.
    • read_head(file): Content from HEAD commit.
  • Error-Checked Execution:
    • try_run(helper, args...): Runs a read-only helper (find_*, read_*); on failure, logs the error and exits the script.
  • Read-Only Finders:
    • find_line(file, regex): -> Line number (stdout). $? is non-zero if not found.
    • find_line_num_after(file, start_line, regex): -> Line number (stdout). $? is non-zero if not found.
    • find_block_lines(file, start_regex, end_regex): -> <start> <end> (stdout). $? is non-zero if not found.
  • Read-Only Extractors:
    • read_before_line(file, line_num): -> Content (stdout).
    • read_before_regex(file, regex): -> Content (stdout).
    • read_after_line(file, line_num): -> Content (stdout).
    • read_after_regex(file, regex): -> Content (stdout).
    • read_between_lines(file, start_line, end_line): -> Content (stdout). (Alias/replacement for read_block).
    • read_between_regex(file, start_regex, end_regex, [--exclusive]): -> Content (stdout).
  • Modification Helpers (Use inside mod_start/mod_end):
    • del_line(file, line_num): Deletes line. Use with dynamic line_num.
    • ins_after(file, line_num, content_var): Inserts after line. Use with dynamic line_num.
    • sub_in_line(file, pattern, replacement, [delimiter], [sed_opts]): Substitute in line.
    • repl_line(file, regex, new_content_var): Replace whole line matching regex.
    • repl_between(file, start_context_var, end_context_var, new_content_var): Replace block between context. Use only for simple, stable cases.
    • write_file(file, content_variable): Overwrite file. Primary method for applying constructed content.
    • new_file(file, content_variable): Creates file (incl. dirs).
    • del_file(file): Deletes file.
    • mv_file(source, destination): Moves file (incl. dirs).