A prompt for forcing AI to use Mermaid more effectively
Your primary function is to transform ANY textual diagram idea, natural language description, malformed/incomplete Mermaid code, or embedded Mermaid blocks within Markdown into production-ready, syntactically pristine, visually compelling, and interactive Mermaid diagrams. You will also provide micro-documentation via a concise changelog and embedded tooltips. Your core operational logic is derived from the comprehensive Mermaid syntax and feature compendium detailed herein.
I. OPERATIONAL PHASES (Your Refinement Lifecycle)
Phase 1: Input Ingestion & Contextual Analysis
Isolate Mermaid Content: If input is Markdown, extract content from ```mermaid ... ``` blocks. For other inputs, identify the core diagram-related text.
If you're a TypeScript developer building applications, you know how quickly complexity can grow. Tracking down errors, understanding performance bottlenecks, and getting visibility into your application's health in production can feel like detective work without the right tools. This is where Sentry comes in.
Sentry is a powerful application monitoring platform that helps you identify, triage, and prioritize errors and performance issues in real-time. While it supports a wide range of languages and platforms, its JavaScript SDKs, which are fully compatible with TypeScript, offer a wealth of features specifically tailored for the nuances of the web browser and Node.js environments where TypeScript thrives.
The goal of this whitepaper is to walk you through the top features of Sentry from the perspective of a TypeScript developer. We'll explore how to set up Sentry, how it automatically captures crucial data, how you can enrich that data with context specific to your application, and techniques for managing that data.
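To ground the setup discussion, here is a minimal initialization sketch. It assumes the @sentry/node package and a placeholder DSN; browser apps would use @sentry/browser instead, and the option values shown are illustrative, not recommendations.

```typescript
import * as Sentry from "@sentry/node";

// Initialize once, as early as possible in your app's lifecycle.
Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0", // placeholder DSN
  tracesSampleRate: 1.0, // capture all transactions; lower this in production
});

function riskyOperation(): void {
  throw new Error("example failure");
}

// Unhandled errors are reported automatically; handled ones can be
// forwarded explicitly:
try {
  riskyOperation();
} catch (err) {
  Sentry.captureException(err);
}
```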
Welcome to your complete guide to Mem0, the intelligent memory layer designed specifically for next-generation AI applications. In this guide, you'll get a clear, detailed understanding of Mem0—from the core ideas and underlying architecture to real-world implementation and strategic advantages. We'll dive into exactly how Mem0 addresses one of the key shortcomings of current AI systems, especially those powered by Large Language Models (LLMs): their inability to retain persistent memory across interactions. By providing an advanced way to store, manage, and retrieve context over time, Mem0 allows developers to create AI apps that are far more personalized, consistent, adaptive, and efficient. Since the AI community's needs vary widely, Mem0 is available both as a fully managed cloud service—for convenience and scalability—and as an open-source solution offering maximum flexibility and control, positioning it as a foundational part of modern stateful AI stacks.
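To make the store/retrieve loop concrete, here is a hypothetical TypeScript sketch. It assumes Mem0's mem0ai npm client exposes add and search methods and uses a placeholder API key; treat the exact method signatures as assumptions and verify against the official docs.

```typescript
import MemoryClient from "mem0ai";

// Hypothetical client setup; the API key and method shapes are assumptions.
const client = new MemoryClient({ apiKey: "your-mem0-api-key" });

async function demo(): Promise<void> {
  // Store a fact from a conversation, scoped to a specific user.
  await client.add(
    [{ role: "user", content: "I'm vegetarian and allergic to nuts." }],
    { user_id: "alice" }
  );

  // Later, retrieve relevant memories to personalize a new response.
  const memories = await client.search("What can Alice eat?", {
    user_id: "alice",
  });
  console.log(memories);
}

demo();
```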
My llms.txt series introduces valuable context about several repos I use daily. This one is the most comprehensive guide on the web about LiteLLM's Python SDK.
LiteLLM Python SDK: The Developer Reference (like llms.txt)
Welcome! This guide provides an exhaustive reference to the LiteLLM Python SDK, meticulously detailing its functionalities, configurations, utilities, and underlying concepts. It is designed for developers seeking the deepest possible understanding and coverage of LiteLLM's capabilities for integrating 100+ Large Language Models (LLMs).
Core Value Proposition: LiteLLM provides a unified, consistent interface (often OpenAI-compatible) to interact with a vast array of LLM providers (OpenAI, Azure OpenAI, Anthropic (Claude), Google (Gemini, Vertex AI), Cohere, Mistral, Bedrock, Ollama, Hugging Face, Replicate, Perplexity, Groq, etc.). This allows you to write model-interaction code once and switch providers primarily by changing the model parameter, while benefiting from standardized features like credential management, error handling, retries, fallbacks, logging, caching, and routing.
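The SDK itself is Python, but the "swap providers by changing only the model parameter" idea can be sketched from TypeScript against LiteLLM's OpenAI-compatible proxy. This sketch assumes a proxy already running at http://localhost:4000 with the listed models configured; the endpoint and model names are illustrative.

```typescript
import OpenAI from "openai";

// Point the standard OpenAI client at a LiteLLM proxy (OpenAI-compatible).
const client = new OpenAI({
  baseURL: "http://localhost:4000", // assumed local LiteLLM proxy
  apiKey: "sk-anything",            // the proxy holds the real provider keys
});

async function ask(model: string): Promise<string | null> {
  const res = await client.chat.completions.create({
    model, // switching providers is just a different model string
    messages: [{ role: "user", content: "Summarize LiteLLM in one line." }],
  });
  return res.choices[0].message.content;
}

// Same code path, different providers (model names are illustrative).
await ask("gpt-4o");
await ask("claude-3-5-sonnet");
```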
My llms.txt series introduces valuable context about several repos I use daily. This one is the most comprehensive guide on the web about LiteLLM's config parameters. It is useful for LLMs like deepseek/gemini-2-thinking to configure LiteLLM in the most advanced way possible.
Table of Contents
Understanding config.yaml Structure
environment_variables Section: Defining Variables within YAML
model_list Section: Defining Models and Deployments
Cloudflare Workers revolutionize how developers build and deploy applications by running code directly on Cloudflare's global edge network. This section delves into the foundational aspects of Cloudflare Workers, providing a comprehensive understanding of their definition, benefits, core concepts, supported languages, infrastructure, and various use cases.
1.1 Definition of Cloudflare Workers
Cloudflare Workers are serverless applications that execute JavaScript, TypeScript, Python, Rust, and WebAssembly (WASM) code directly on Cloudflare’s edge servers. Operating in over 300 data centers worldwide, Workers intercept HTTP requests and responses, allowing developers to manipulate, enhance, or respond to web traffic in real time.
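For a concrete picture of that request interception, here is a minimal module-syntax Worker; the route and response body are illustrative.

```typescript
// A minimal Worker: inspect the incoming request and respond from the edge.
export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    if (url.pathname === "/hello") {
      return new Response("Hello from the edge!", {
        headers: { "content-type": "text/plain" },
      });
    }
    // Pass all other traffic through to the origin unchanged.
    return fetch(request);
  },
};
```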
How to Optimize MediaRecorder for High-Quality Audio Suitable for OpenAI’s Whisper Model
To achieve the best transcription results with OpenAI’s Whisper model, it’s essential to capture audio in a format that aligns with Whisper's specifications:
Format: Mono, uncompressed (WAV) or losslessly compressed (FLAC)
Sample Depth: 16-bit
Sampling Rate: 16 kHz
While the MediaStream Recording API (MediaRecorder) primarily records audio in compressed formats like WebM or Ogg with codecs such as Opus or Vorbis, you can optimize its settings to approximate Whisper’s requirements as closely as possible. Additionally, post-processing steps (e.g., using ffmpeg) can convert the recorded audio into the desired format.
This guide outlines how to configure MediaRecorder for optimal audio quality, focusing on audio-related options, and provides steps for converting the recorded audio to WAV or FLAC.
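As a starting point, a sketch of that configuration is below. The constraint values follow the specs listed above, but browsers treat sampleRate and channelCount as hints (and Opus internally operates at 48 kHz), so server-side conversion is still expected; the ffmpeg flags in the final comment are one way to do it.

```typescript
async function recordForWhisper(): Promise<void> {
  // Ask for a mono, 16 kHz stream; browsers may only approximate these hints.
  const stream = await navigator.mediaDevices.getUserMedia({
    audio: {
      channelCount: 1,   // mono, per Whisper's preference
      sampleRate: 16000, // hint only; capture is often 44.1/48 kHz
      echoCancellation: true,
      noiseSuppression: true,
    },
  });

  const recorder = new MediaRecorder(stream, {
    mimeType: "audio/webm;codecs=opus",
    audioBitsPerSecond: 128_000, // generous bitrate to preserve quality
  });

  const chunks: Blob[] = [];
  recorder.ondataavailable = (e) => chunks.push(e.data);
  recorder.onstop = () => {
    const blob = new Blob(chunks, { type: "audio/webm" });
    // Upload `blob`, then convert server-side, e.g.:
    //   ffmpeg -i input.webm -ar 16000 -ac 1 -sample_fmt s16 output.wav
    console.log(`Recorded ${blob.size} bytes`);
  };

  recorder.start();
  setTimeout(() => recorder.stop(), 5000); // record 5 seconds, then stop
}

recordForWhisper();
```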
Use this script to extract the content of any project, including only text files and excluding directories like /env or /node_modules, to use an LLM's attention effectively.
# Display the project structure and source code organized by directories and file extensions
(
# 1. Display the Project Structure excluding specified directories and files
echo "# Project Structure"
tree -I 'node_modules|env|venv|ENV|.git|.svn|.hg|.bzr|build|dist|target|out|bin|obj|__pycache__|.cache|cache|.pytest_cache|logs|*.log|*.tmp|*.class|*.o|*.so|*.dll|*.exe|*.pyc|*.pyo|*.whl|.idea|.vscode|.DS_Store|Thumbs.db|*~|#*#|.*.swp|.*.swo|.*.tmp|.*.temp'
echo
echo "# Source Code"
# 2. Find all relevant text files, excluding binaries and dependency dirs
#    (the original gist is truncated here; this completion is illustrative)
find . -type f -not -path '*/node_modules/*' -not -path '*/env/*' \
  -not -path '*/.git/*' -exec grep -Il '' {} + |
while IFS= read -r file; do
  printf '\n## %s\n' "$file"; cat "$file"
done
)