Yigit Konur (yigitkonur)
yigitkonur / comprehensive-eslint-rules.md
Created August 13, 2025 07:01
A context for guiding an LLM to choose the best ESLint rules for your projects

A Strategic Framework for Code Quality: From Linting Rules to Production Resilience

Let's walk through each of them using the summary table below:

ESLint Core "Possible Problems" Rules (Summary Table)

| Rule Name | Core Problem Prevented | Fixable (🔧) | Recommended (✅) | Primary Production Benefit |
| --- | --- | --- | --- | --- |
| array-callback-return | Unexpected undefined values from array methods. | 🔧 | | Data Integrity |
| constructor-super | ReferenceError in derived class constructors. | | | Stability |
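To make the first row concrete, here is the kind of silent bug that array-callback-return catches, shown as a minimal TypeScript sketch (the variable names are purely illustrative):

```ts
// Without `array-callback-return`, a callback that forgets to return a value
// silently produces an array of `undefined` instead of failing loudly.
const prices = [10, 20, 30];

const discounted = prices.map((price) => {
  price * 0.9; // BUG: missing `return`, so every element becomes `undefined`
});
// discounted is [undefined, undefined, undefined], not [9, 18, 27].
// With the rule enabled, ESLint flags this callback at lint time,
// long before the bad data reaches production.
```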
yigitkonur / llm-mermaid.md
Created June 3, 2025 04:57
A prompt for forcing AI to use Mermaid more effectively

Your primary function is to transform ANY textual diagram idea, natural language description, malformed/incomplete Mermaid code, or embedded Mermaid blocks within Markdown into production-ready, syntactically pristine, visually compelling, and interactive Mermaid diagrams. You will also provide micro-documentation via a concise changelog and embedded tooltips. Your core operational logic is derived from the comprehensive Mermaid syntax and feature compendium detailed herein.


I. OPERATIONAL PHASES (Your Refinement Lifecycle)

Phase 1: Input Ingestion & Contextual Analysis

  1. Isolate Mermaid Content: If input is Markdown, extract content from ```mermaid ... ``` blocks. For other inputs, identify the core diagram-related text.
  2. Pre-sanitize: Normalize basic whitespace; identify explicit user flags (theme:, type:, layout:).
  3. Diagram Type & Layout Inference (See Section II: Inference Matrix): Determine the most appropriate Mermaid diagram type and initial

If you're a TypeScript developer building applications, you know how quickly complexity can grow. Tracking down errors, understanding performance bottlenecks, and getting visibility into your application's health in production can feel like detective work without the right tools. This is where Sentry comes in.

Sentry is a powerful application monitoring platform that helps you identify, triage, and prioritize errors and performance issues in real-time. While it supports a wide range of languages and platforms, its JavaScript SDKs, which are fully compatible with TypeScript, offer a wealth of features specifically tailored for the nuances of the web browser and Node.js environments where TypeScript thrives.
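Getting started takes very little code. Here is a minimal sketch of initializing the Node.js SDK in a TypeScript service; the DSN value and riskyOperation are placeholders, not values from a real project:

```ts
import * as Sentry from "@sentry/node";

// Initialize once, as early as possible in your app's lifecycle.
Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0", // placeholder DSN
  tracesSampleRate: 1.0, // capture all transactions; sample lower in production
});

// A placeholder for any code path that might throw.
function riskyOperation(): void {
  throw new Error("something went wrong");
}

try {
  riskyOperation();
} catch (err) {
  Sentry.captureException(err); // the error is grouped and triaged in Sentry
  throw err;
}
```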

The goal of this whitepaper is to walk you through the top features of Sentry from the perspective of a TypeScript developer. We'll explore how to set up Sentry, how it automatically captures crucial data, how you can enrich that data with context specific to your application, techniques for managing data

yigitkonur / llms-txt-mem0
Created April 12, 2025 04:39
mem0 comprehensive guide (context for my LLMs)
Welcome to your complete guide to Mem0, the intelligent memory layer designed specifically for next-generation AI applications. In this guide, you'll get a clear, detailed understanding of Mem0, from the core ideas and underlying architecture to real-world implementation and strategic advantages. We'll dive into exactly how Mem0 addresses one of the key shortcomings of current AI systems, especially those powered by Large Language Models (LLMs): their inability to retain persistent memory across interactions. By providing an advanced way to store, manage, and retrieve context over time, Mem0 lets developers create AI apps that are far more personalized, consistent, adaptive, and efficient. Since the AI community's needs vary widely, Mem0 is available both as a fully managed cloud service, for convenience and scalability, and as an open-source solution, offering maximum flexibility and control and positioning it as a foundational part of modern stateful AI stacks.
Our goal for this guide was simple but
yigitkonur / llms-txt-litellm-client-sdk-context.md
Created March 29, 2025 20:34
My llms.txt series introduces valuable context about several repos I use daily. This one is the most comprehensive guide on the web about LiteLLM's Python SDK.

LiteLLM Python SDK: The Developer Reference (like llms.txt)

Welcome! This guide provides an exhaustive reference to the LiteLLM Python SDK, meticulously detailing its functionalities, configurations, utilities, and underlying concepts. It is designed for developers seeking the deepest possible understanding of LiteLLM's capabilities for integrating 100+ Large Language Models (LLMs).

Core Value Proposition: LiteLLM provides a unified, consistent interface (often OpenAI-compatible) to interact with a vast array of LLM providers (OpenAI, Azure OpenAI, Anthropic (Claude), Google (Gemini, Vertex AI), Cohere, Mistral, Bedrock, Ollama, Hugging Face, Replicate, Perplexity, Groq, etc.). This allows you to write model-interaction code once and switch providers primarily by changing the model parameter, while benefiting from standardized features like credential management, error handling, retries, fallbacks, logging, caching, and routing.

Who Should Use This Guide:

  • Deve
yigitkonur / llms-txt-litellm-configuration-context.md
Last active September 19, 2025 03:26
My llms.txt series introduces valuable context about several repos I use daily. This one is the most comprehensive guide on the web about LiteLLM's config parameters. It is useful for LLMs like deepseek/gemini-2-thinking to configure LiteLLM in the most advanced way possible.

Table of Contents

  • Understanding config.yaml Structure
  • environment_variables Section: Defining Variables within YAML
  • model_list Section: Defining Models and Deployments
    • Model List Parameter Summary
    • model_name
    • litellm_params
      • model
      • api_key
yigitkonur / cloudflare-workers.md
Last active January 17, 2025 12:21
AI Instruction Content for Cloudflare Workers

Cloudflare Workers Bible

1. FOUNDATIONS OF CLOUDFLARE WORKERS

Cloudflare Workers revolutionize how developers build and deploy applications by running code directly on Cloudflare's global edge network. This section delves into the foundational aspects of Cloudflare Workers, providing a comprehensive understanding of their definition, benefits, core concepts, supported languages, infrastructure, and various use cases.

1.1 Definition of Cloudflare Workers

Cloudflare Workers are serverless applications that execute JavaScript, TypeScript, Python, Rust, and WebAssembly (WASM) code directly on Cloudflare’s edge servers. Operating at over 300 data centers worldwide, Workers intercept HTTP requests and responses, allowing developers to manipulate, enhance, or respond to web traffic in real-time.
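To illustrate the request/response model just described, here is a minimal sketch of a Worker written in TypeScript using the module syntax; the routing and greeting logic are purely illustrative:

```ts
// A Worker exports a fetch handler; Cloudflare invokes it for every
// HTTP request that reaches your route on the edge network.
export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);

    // Inspect or manipulate the request in real time at the edge.
    if (url.pathname === "/health") {
      return new Response("ok", { status: 200 });
    }

    return new Response(`Hello from the edge! You asked for ${url.pathname}`, {
      headers: { "content-type": "text/plain" },
    });
  },
};
```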

yigitkonur / MediaRecorder-Settings-for-Whisper.md
Last active October 31, 2024 04:48
How to optimize MediaRecorder for High-Quality Audio Suitable for OpenAI’s Whisper Model

To achieve the best transcription results with OpenAI’s Whisper model, it’s essential to capture audio in a format that aligns with Whisper's specifications:

  • Format: Mono, uncompressed (WAV) or losslessly compressed (FLAC)
  • Sample Depth: 16-bit
  • Sampling Rate: 16 kHz

While the MediaStream Recording API (MediaRecorder) primarily records audio in compressed formats like WebM or Ogg with codecs such as Opus or Vorbis, you can optimize its settings to approximate Whisper’s requirements as closely as possible. Additionally, post-processing steps (e.g., using ffmpeg) can convert the recorded audio into the desired format.

This guide outlines how to configure MediaRecorder for optimal audio quality, focusing on audio-related options, and provides steps for converting the recorded audio to WAV or FLAC.
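As a starting point, here is a minimal TypeScript sketch of the configuration this guide builds toward. Note that browsers treat audio constraints as hints, so the actual track settings may differ from what you request; always verify, then convert in post-processing:

```ts
async function recordForWhisper(): Promise<void> {
  // Ask for mono audio at 16 kHz; these constraints are hints, not guarantees.
  const stream = await navigator.mediaDevices.getUserMedia({
    audio: {
      channelCount: 1,        // mono, matching Whisper's preference
      sampleRate: 16000,      // 16 kHz hint; verify via track.getSettings()
      echoCancellation: true, // cleaner speech for transcription
      noiseSuppression: true,
    },
  });

  const recorder = new MediaRecorder(stream, {
    mimeType: "audio/webm;codecs=opus", // widely supported compressed format
    audioBitsPerSecond: 128_000,        // generous bitrate for speech
  });

  const chunks: Blob[] = [];
  recorder.ondataavailable = (event) => chunks.push(event.data);
  recorder.onstop = () => {
    const blob = new Blob(chunks, { type: "audio/webm" });
    // Convert this blob to 16-bit / 16 kHz WAV or FLAC (e.g. with ffmpeg)
    // before sending it to Whisper, as described later in this guide.
    console.log(`Recorded ${blob.size} bytes`);
  };

  recorder.start();
}
```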

yigitkonur / continue-prompt-for-cursor.md
Created October 29, 2024 20:33
The best "continue" prompt (based on my evals) for implementing a given task, including a multi-round review to make sure everything is completed

Great job! You can continue only by following these steps:

Task Completion Workflow

  1. Review Previous Implementation

    • Examine the prior work to ensure it has been implemented correctly.
  2. Identify Remaining Tasks

    • List out all tasks that are yet to be completed.
yigitkonur / extract-github-repo-for-llms.sh
Last active October 29, 2024 20:02
Use this script to extract the content of any project, including only text files and excluding directories like /env or /node_modules, to use the LLM's attention effectively.
# Display the project structure and source code organized by directories and file extensions
(
# 1. Display the Project Structure excluding specified directories and files
echo "# Project Structure"
tree -I 'node_modules|env|venv|ENV|.git|.svn|.hg|.bzr|build|dist|target|out|bin|obj|__pycache__|.cache|cache|.pytest_cache|logs|*.log|*.tmp|*.class|*.o|*.so|*.dll|*.exe|*.pyc|*.pyo|*.whl|.idea|.vscode|.DS_Store|Thumbs.db|*~|#*#|.*.swp|.*.swo|.*.tmp|.*.temp'
echo
echo "# Source Code"
# 2. Find all relevant text files, including specific hidden files, while excluding others