Mohammad Hoseini Rad (mhrlife)
Learning Go
mhrlife / prompt-auditor-system-prompt.md
Created November 27, 2025 07:36
System prompt for a Prompt Analysis Expert that evaluates LLM prompts across 8 dimensions: contradictions, ambiguity, punctuation/grammar, semantic consistency, negative language, complexity/overloading, context sufficiency, and syntactic structure. Returns structured analysis with identified issues, solutions, and a revised prompt.

System Prompt: Prompt Analysis Expert

You are a specialized Prompt Analyzer that evaluates text prompts for clarity, coherence, and LLM comprehension. Your role is to identify potential issues that could confuse or misdirect language models, then provide specific, actionable solutions.

Your Analysis Framework

1. Contradictions & Conflicting Instructions

  • Identify terms or instructions that oppose each other
  • Look for multiple directives that cannot be simultaneously satisfied
  • Flag instructions that pull the model in different directions
mhrlife / my_personal_agent_prompt.md
Created September 1, 2025 05:00
A system prompt for an AI assistant specialized in computer engineering. Provides PhD-level technical answers with citations, featuring a two-part response format: concise answer + storytelling explanation with formal definitions. Optimized for ADHD-friendly reading. (I use it with Opus 4.1)

You are a personal agent for me. You find answers to computer engineering questions. I am a senior-level programmer with advanced education. You don't give naive or simple answers. Your answers are PhD-level, backed by sources, without any assumptions or guesses.

Speaking Style

Your answer must be in two parts:

  1. Concise Answer: low-verbosity, to-the-point, ADHD-friendly response.
  2. Detailed Explanation: medium-verbosity, ADHD-friendly narrative that explains the concept through storytelling.
// IntelliJ Platform imports for an editor action that copies content to the system clipboard
import com.intellij.openapi.actionSystem.*
import com.intellij.openapi.application.ApplicationManager
import com.intellij.openapi.fileEditor.*
import com.intellij.openapi.project.Project
import com.intellij.openapi.util.Key
import com.intellij.openapi.vfs.*
import java.awt.Toolkit
import java.awt.datatransfer.StringSelection
func validateInitData(inputData, botToken string) (bool, error) {
	initData, err := url.ParseQuery(inputData)
	if err != nil {
		logrus.WithError(err).Errorln("couldn't parse web app input data")
		return false, err
	}
	dataCheckString := make([]string, 0, len(initData))
	for k, v := range initData {
		if k == "hash" {
			continue // the hash itself is excluded from the signed payload
		}
		dataCheckString = append(dataCheckString, k+"="+v[0])
	}
	sort.Strings(dataCheckString)
	secret := hmac.New(sha256.New, []byte("WebAppData"))
	secret.Write([]byte(botToken))
	h := hmac.New(sha256.New, secret.Sum(nil))
	h.Write([]byte(strings.Join(dataCheckString, "\n")))
	return hex.EncodeToString(h.Sum(nil)) == initData.Get("hash"), nil
}
/*
create a file and name it: food
meat kebab 5
meat stake 5
meat ice-cream 0
meat baklava 0
meat tea 0
meat coffee 0
meat burger 4
meat hot-dog 2
*/
mhrlife / render.go
Created January 1, 2024 16:55
Render a templ component in Go-Echo
func Render(c echo.Context, comp templ.Component) error {
	c.Response().Header().Set(echo.HeaderContentType, echo.MIMETextHTML)
	return comp.Render(c.Request().Context(), c.Response().Writer)
}
//go:embed gcra.lua
var gcraScript string

type RateLimit struct {
	rdb     *redis.Client
	prefix  string
	gcra    *redis.Script
	timeout time.Duration
}
mhrlife / gcra.lua
Created November 7, 2023 16:09
GCRA Rate limiter implementation with Lua
redis.replicate_commands()
local rate_limit_key = KEYS[1]
local burst = tonumber(ARGV[1])
local emission_interval = tonumber(ARGV[2])
-- calculating time using this idea (https://github.com/rwz/redis-gcra/blob/master/vendor/perform_gcra_ratelimit.lua)
local jan_1_2017 = 1483228800
local now = redis.call("TIME")
now = (now[1] - jan_1_2017) + (now[2] / 1000000)
package main
import (
"context"
"errors"
"fmt"
"github.com/redis/go-redis/v9"
"sync"
"time"
)
package main
import (
"context"
"fmt"
"github.com/redis/go-redis/v9"
"log"
"time"
)