If you're a TypeScript developer building applications, you know how quickly complexity can grow. Tracking down errors, understanding performance bottlenecks, and getting visibility into your application's health in production can feel like detective work without the right tools. This is where Sentry comes in.

Sentry is a powerful application monitoring platform that helps you identify, triage, and prioritize errors and performance issues in real time. While it supports a wide range of languages and platforms, its JavaScript SDKs, which are fully compatible with TypeScript, offer a wealth of features specifically tailored for the nuances of the web browser and Node.js environments where TypeScript thrives.

The goal of this whitepaper is to walk you through the top features of Sentry from the perspective of a TypeScript developer. We'll explore how to set up Sentry, how it automatically captures crucial data, how you can enrich that data with context specific to your application, techniques for managing data
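The preview cuts off above, but the setup step it mentions is small enough to sketch. The init-and-capture pattern is essentially the same across Sentry's SDKs; below is a minimal sketch in Python using the standard sentry_sdk package (the DSN is a placeholder, and the division by zero just simulates an application error):

import sentry_sdk

sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",  # placeholder DSN
    traces_sample_rate=1.0,  # sample 100% of transactions for performance monitoring
)

try:
    1 / 0  # simulate an application error
except ZeroDivisionError as err:
    sentry_sdk.capture_exception(err)  # the event shows up in Sentry in real time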

@yigitkonur
yigitkonur / llms-txt-mem0
Created April 12, 2025 04:39
mem0 comprehensive guide (context for my LLMs)
Welcome to your complete guide to Mem0, the intelligent memory layer designed specifically for next-generation AI applications. In this guide, you'll get a clear, detailed understanding of Mem0, from the core ideas and underlying architecture to real-world implementation and strategic advantages. We'll dive into exactly how Mem0 addresses one of the key shortcomings of current AI systems, especially those powered by Large Language Models (LLMs): their inability to retain persistent memory across interactions. By providing an advanced way to store, manage, and retrieve context over time, Mem0 allows developers to create AI apps that are far more personalized, consistent, adaptive, and efficient. Since the AI community's needs vary widely, Mem0 is available both as a fully managed cloud service, for convenience and scalability, and as an open-source solution that offers maximum flexibility and control, positioning it as a foundational part of modern stateful AI stacks.
Our goal for this guide was simple but
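The preview is truncated above, but the core "memory layer" loop is easy to sketch. Here is a hedged example with the open-source Python SDK; the Memory class and its add/search methods follow mem0's documented usage, though the exact signatures should be checked against the current docs:

from mem0 import Memory

m = Memory()  # self-hosted, open-source memory store

# Persist a fact about a user so it survives across sessions
m.add("I prefer vegetarian food.", user_id="alice")

# Later, even in a new process, pull relevant context back for the prompt
related = m.search("What should I cook for Alice?", user_id="alice")
print(related)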
@yigitkonur
yigitkonur / llms-txt-litellm-client-sdk-context.md
Created March 29, 2025 20:34
My llms.txt series introduces valuable context about several repos I'm using daily. This one is the most comprehensive guide on the web about LiteLLM's Python SDK.

LiteLLM Python SDK: The Developer Reference (like llms.txt)

Welcome! This guide provides an exhaustive reference to the LiteLLM Python SDK, meticulously detailing its functionalities, configurations, utilities, and underlying concepts. It is designed for developers seeking the deepest possible understanding and coverage of LiteLLM's capabilities for integrating 100+ Large Language Models (LLMs).

Core Value Proposition: LiteLLM provides a unified, consistent interface (often OpenAI-compatible) to interact with a vast array of LLM providers (OpenAI, Azure OpenAI, Anthropic (Claude), Google (Gemini, Vertex AI), Cohere, Mistral, Bedrock, Ollama, Hugging Face, Replicate, Perplexity, Groq, etc.). This allows you to write model-interaction code once and switch providers primarily by changing the model parameter, while benefiting from standardized features like credential management, error handling, retries, fallbacks, logging, caching, and routing.
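To make that concrete, here is a minimal sketch of the "write once, switch providers via the model parameter" idea using the SDK's completion function; the model ids are illustrative, and provider API keys are assumed to be set in their usual environment variables:

from litellm import completion

messages = [{"role": "user", "content": "Summarize LiteLLM in one sentence."}]

# The call shape is identical across providers; only `model` changes
gpt_resp = completion(model="gpt-4o-mini", messages=messages)
claude_resp = completion(model="anthropic/claude-3-5-sonnet-20240620", messages=messages)

# Responses come back in a standardized, OpenAI-compatible shape
print(gpt_resp.choices[0].message.content)
print(claude_resp.choices[0].message.content)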

Who Should Use This Guide:

  • Deve
@yigitkonur
yigitkonur / llms-txt-litellm-configuration-context.md
Last active May 13, 2025 16:47
My llms.txt series introduces valuable context about several repos I'm using daily. This one is the most comprehensive guide on the web about LiteLLM's config parameters. It is useful for LLMs like deepseek/gemini-2-thinking to configure LiteLLM in the most advanced way possible.

Table of Contents

  • Understanding config.yaml Structure
  • environment_variables Section: Defining Variables within YAML
  • model_list Section: Defining Models and Deployments
    • Model List Parameter Summary
    • model_name
    • litellm_params
      • model
      • api_key
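The table of contents is cut off in this preview, but the sections it lists map onto a small YAML shape. As a hedged sketch, the following builds an equivalent config in Python and prints it as YAML; the "os.environ/..." string is LiteLLM's convention for referencing environment variables, and the model ids are placeholders:

import yaml  # pip install pyyaml

config = {
    "environment_variables": {"OPENAI_API_KEY": "sk-placeholder"},
    "model_list": [
        {
            "model_name": "gpt-4o",  # the alias callers send to LiteLLM
            "litellm_params": {
                "model": "openai/gpt-4o",  # provider-prefixed model id
                "api_key": "os.environ/OPENAI_API_KEY",  # env-var reference
            },
        }
    ],
}
print(yaml.safe_dump(config, sort_keys=False))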
@yigitkonur
yigitkonur / cloudflare-workers.md
Last active January 17, 2025 12:21
AI Instruction Content for Cloudflare Workers

Cloudflare Workers Bible

1. FOUNDATIONS OF CLOUDFLARE WORKERS

Cloudflare Workers revolutionize how developers build and deploy applications by running code directly on Cloudflare's global edge network. This section delves into the foundational aspects of Cloudflare Workers, providing a comprehensive understanding of their definition, benefits, core concepts, supported languages, infrastructure, and various use cases.

1.1 Definition of Cloudflare Workers

Cloudflare Workers are serverless applications that execute JavaScript, TypeScript, Python, Rust, and WebAssembly (WASM) code directly on Cloudflare's edge servers. Operating across more than 300 data centers worldwide, Workers intercept HTTP requests and responses, allowing developers to manipulate, enhance, or respond to web traffic in real time.
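Since Python is among the supported languages, the request-interception model can be sketched in a few lines. The on_fetch entry point and Response.new call below follow the pattern from Cloudflare's Python Workers beta and should be treated as assumptions to verify against the current docs:

from js import Response

async def on_fetch(request, env):
    # Runs at the nearest edge location; inspect the request and reply in real time
    return Response.new(f"Hello from the edge! You requested {request.url}")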

@yigitkonur
yigitkonur / MediaRecorder-Settings-for-Whisper.md
Last active October 31, 2024 04:48
How to optimize MediaRecorder for High-Quality Audio Suitable for OpenAI’s Whisper Model

To achieve the best transcription results with OpenAI’s Whisper model, it’s essential to capture audio in a format that aligns with Whisper's specifications:

  • Format: Mono, uncompressed (WAV) or losslessly compressed (FLAC)
  • Sample Depth: 16-bit
  • Sampling Rate: 16 kHz

While the MediaStream Recording API (MediaRecorder) primarily records audio in compressed formats like WebM or Ogg with codecs such as Opus or Vorbis, you can optimize its settings to approximate Whisper’s requirements as closely as possible. Additionally, post-processing steps (e.g., using ffmpeg) can convert the recorded audio into the desired format.
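For that post-processing step, here is a minimal sketch that shells out to ffmpeg to convert a MediaRecorder capture into the format listed above; the file names are placeholders, and ffmpeg is assumed to be installed and on the PATH:

import subprocess

def convert_for_whisper(src: str, dst: str = "audio.wav") -> None:
    # Produce mono, 16 kHz, 16-bit PCM WAV per Whisper's specifications
    subprocess.run(
        ["ffmpeg", "-y",
         "-i", src,            # e.g. recording.webm (Opus in WebM)
         "-ac", "1",           # downmix to mono
         "-ar", "16000",       # resample to 16 kHz
         "-c:a", "pcm_s16le",  # encode as 16-bit PCM
         dst],
        check=True,
    )

convert_for_whisper("recording.webm")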

This guide outlines how to configure MediaRecorder for optimal audio quality, focusing on audio-related options, and provides steps for converting the recorded audio to WAV or FLAC.

@yigitkonur
yigitkonur / continue-prompt-for-cursor.md
Created October 29, 2024 20:33
Best continue prompt after my evals to implement a given task, including a multi-round review to make sure everything is completed

Great job! You can continue only by following these steps:

Task Completion Workflow

  1. Review Previous Implementation

    • Examine the prior work to ensure it has been implemented correctly.
  2. Identify Remaining Tasks

    • List out all tasks that are yet to be completed.
@yigitkonur
yigitkonur / extract-github-repo-for-llms.sh
Last active October 29, 2024 20:02
Use this script to extract the content of any project, including only text files and excluding directories like /env or /node_modules, to use an LLM's attention effectively.
# Display the project structure and source code organized by directories and file extensions
(
# 1. Display the Project Structure excluding specified directories and files
echo "# Project Structure"
tree -I 'node_modules|env|venv|ENV|.git|.svn|.hg|.bzr|build|dist|target|out|bin|obj|__pycache__|.cache|cache|.pytest_cache|logs|*.log|*.tmp|*.class|*.o|*.so|*.dll|*.exe|*.pyc|*.pyo|*.whl|.idea|.vscode|.DS_Store|Thumbs.db|*~|#*#|.*.swp|.*.swo|.*.tmp|.*.temp'
echo
echo "# Source Code"
# 2. Find all relevant text files, including specific hidden files, while excluding others
@yigitkonur
yigitkonur / submit_batch_jobs.sh
Created July 30, 2024 15:43
This script automates the submission of Google Cloud Batch jobs for decompressing large volumes of files stored in GCS buckets, overcoming limitations of other cloud services. It reads from a CSV file containing file paths and required disk sizes, generates and submits job configurations with robust error handling and logging, and utilizes a cus…
#!/bin/bash
# Bulk GCS Decompression Job Submitter
#
# Purpose: Automate submission of Google Cloud Batch jobs for decompressing large volumes of files in GCS.
#
# Background:
# - Cloud Functions/Run: Limited by 10GB disk space
# - Dataflow: Too complex for simple decompression
# - Solution: Use single machine with SSD, then upload results back to GCS
@yigitkonur
yigitkonur / fix_inconsistent_csv.py
Created April 25, 2024 20:18
This Python script is designed to fix CSV files by removing any rows that have an incorrect number of fields compared to the header row. It reads the CSV file, checks each row against the expected number of fields, and creates a new file with only the valid rows. The original file is then replaced with the updated file, ensuring that the problem…
import csv
import sys
import os

def check_and_update_csv(file_path):
    with open(file_path, 'r') as file:
        reader = csv.reader(file)
        header = next(reader)
        expected_fields = len(header)
        # Keep the header plus every row whose field count matches it
        updated_rows = [header] + [row for row in reader if len(row) == expected_fields]
    # Write the valid rows out, then replace the original file
    temp_path = file_path + '.tmp'
    with open(temp_path, 'w', newline='') as file:
        csv.writer(file).writerows(updated_rows)
    os.replace(temp_path, file_path)