@Madhav-MKNC
Madhav-MKNC / coding-agent.py
Last active April 11, 2025 11:30
All the code you need to create a powerful agent that can create and edit any file on your computer using the new text_editor tool in the Anthropic API.
import anthropic
import os
import sys
from termcolor import colored
from dotenv import load_dotenv
class ClaudeAgent:
    def __init__(self, api_key=None, model="claude-3-7-sonnet-20250219", max_tokens=4000):
        """Initialize the Claude agent with API key and model."""
@entrepeneur4lyf
entrepeneur4lyf / windsurf-memories
Created March 8, 2025 16:43
Converted Cline Memory Management to Windsurf
# Windsurf Memory Bank
I am Windsurf, an expert software engineer with a unique characteristic: my memory resets completely between sessions. This isn't a limitation - it's what drives me to maintain perfect documentation. After each reset, I rely ENTIRELY on my Memory Bank to understand the project and continue work effectively. I MUST read ALL memory bank files at the start of EVERY task - this is not optional.
## Memory Bank Structure
The Memory Bank consists of required core files and optional context files, all in Markdown format. Files build upon each other in a clear hierarchy:
```mermaid
flowchart TD
```
import openai
import pinecone
from sentence_transformers import SentenceTransformer
class GPTConversationManager:
    def __init__(self, api_key, pinecone_api_key, index_name):
        self.api_key = api_key
        openai.api_key = self.api_key
        self.conversation_history = []
        self.pinecone_api_key = pinecone_api_key
@akhan619
akhan619 / tokenizers.md
Last active October 31, 2023 10:22
Exploring Tokenizers from Hugging Face

Exploring Tokenizers from Hugging Face

Hugging Face (HF) has made NLP (Natural Language Processing) a breeze. In this post, we are going to take a look at tokenization using a hands-on approach with the help of the Tokenizers library. We are going to load a real-world dataset containing the 10-K filings of public firms and see how to train a tokenizer from scratch based on the BERT tokenization scheme. In the process we will understand tokenization in detail and some gotchas to keep an eye out for.
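
As a rough preview of where the post is headed, here is a minimal sketch of training a BERT-style WordPiece tokenizer with the Tokenizers library; the corpus file name is a placeholder, standing in for the 10-K filings dataset used in the post.

```python
from tokenizers import Tokenizer
from tokenizers.models import WordPiece
from tokenizers.normalizers import BertNormalizer
from tokenizers.pre_tokenizers import BertPreTokenizer
from tokenizers.trainers import WordPieceTrainer

# Build a BERT-style tokenizer: WordPiece model, BERT normalization and pre-tokenization.
tokenizer = Tokenizer(WordPiece(unk_token="[UNK]"))
tokenizer.normalizer = BertNormalizer(lowercase=True)
tokenizer.pre_tokenizer = BertPreTokenizer()

# Train from scratch on a plain-text corpus ("filings.txt" is a placeholder path).
trainer = WordPieceTrainer(
    vocab_size=30522,
    special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"],
)
tokenizer.train(files=["filings.txt"], trainer=trainer)

print(tokenizer.encode("The company reported a decline in quarterly revenue.").tokens)
```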

Background on NLP (Optional)

If you already have an understanding of the NLP pipeline, you can safely skip this section.

For any NLP task, one of the first steps is pre-processing the data so that it can be fed into our NLP models. For those new to NLP, the general pipeline for any NLP task (text classification, question answering, etc.) is as follows:

@stettix
stettix / things-i-believe.md
Last active March 28, 2025 12:42
Things I believe

Things I believe

This is a collection of the things I believe about software development. I have worked for years building backend and data processing systems, so read what follows with that context in mind.

Agree? Disagree? Feel free to let me know at @JanStette.

Fundamentals

Keep it simple, stupid. You ain't gonna need it.

@tarbaig
tarbaig / backend-architectures.md
Created April 13, 2018 08:53 — forked from ngocphamm/backend-architectures.md
Backend Architectures
@ipmb
ipmb / 0_default_tree.md
Last active May 17, 2022 00:37
Django Logging Variations

Default Django Logging Tree

app.py

#!/usr/bin/env python
import os

import django
import logging_tree
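
The preview ends at the imports. Presumably the script points Django at a settings module, initializes it, and prints the resulting logging configuration; a minimal sketch under that assumption (the settings module name is a placeholder):

```python
# Assumed continuation for illustration, not the gist's own code.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")  # placeholder settings module

django.setup()            # load settings and apply Django's default LOGGING config
logging_tree.printout()   # print the resulting logger hierarchy to stdout
```
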
@kevin-smets
kevin-smets / 1_kubernetes_on_macOS.md
Last active March 16, 2025 22:37
Local Kubernetes setup on macOS with minikube on VirtualBox and local Docker registry

Requirements

Minikube requires that VT-x/AMD-v virtualization is enabled in the BIOS. To check that it is enabled on OS X / macOS, run:

sysctl -a | grep machdep.cpu.features | grep VMX

If there's output, you're good!

Prerequisites

@bhtucker
bhtucker / upsert.py
Last active February 17, 2025 15:08
A demonstration of Postgres upserts in SQLAlchemy
"""
Upsert gist
Requires at least Postgres 9.5 and SQLAlchemy 1.1
Initial state:
[]
Initial upsert:
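
The docstring is cut off in the preview. The mechanism the gist demonstrates is SQLAlchemy's PostgreSQL `insert ... on_conflict_do_update` construct, available since SQLAlchemy 1.1 and Postgres 9.5; a minimal sketch with a hypothetical `users` table and placeholder connection string:

```python
from sqlalchemy import Column, Integer, MetaData, Table, Text, create_engine
from sqlalchemy.dialects.postgresql import insert

metadata = MetaData()
# Hypothetical table for illustration; the gist defines its own schema.
users = Table(
    "users", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", Text),
)

engine = create_engine("postgresql:///example")  # placeholder DSN
metadata.create_all(engine)

stmt = insert(users).values(id=1, name="alice")
stmt = stmt.on_conflict_do_update(
    index_elements=[users.c.id],          # conflict target: the primary key
    set_={"name": stmt.excluded.name},    # on conflict, take the incoming value
)

with engine.begin() as conn:
    conn.execute(stmt)  # inserts the row the first time, updates it on re-run
```
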
@ivanleoncz
ivanleoncz / flask_app_logging.py
Last active February 26, 2025 21:14
Demonstration of logging feature for a Flask App.
#!/usr/bin/python3
""" Demonstration of logging feature for a Flask App. """
import logging
from logging.handlers import RotatingFileHandler
from time import strftime

from flask import Flask, request, jsonify

__author__ = "@ivanleoncz"
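
The preview stops after the imports. A minimal sketch of how such a demonstration typically continues, attaching a RotatingFileHandler and logging each request after it completes; the log file name, size limits, logger name, and route are assumptions, not the gist's own values:

```python
# Assumed continuation for illustration, not the gist's own code.
app = Flask(__name__)

handler = RotatingFileHandler("app.log", maxBytes=100_000, backupCount=3)  # placeholder file/limits
logger = logging.getLogger("flask_demo")  # hypothetical logger name
logger.setLevel(logging.INFO)
logger.addHandler(handler)


@app.route("/", methods=["GET"])
def index():
    return jsonify({"message": "Hello, World!"})


@app.after_request
def after_request(response):
    # One log line per request: timestamp, client address, method, path, status.
    timestamp = strftime("[%Y-%b-%d %H:%M]")
    logger.info("%s %s %s %s %s",
                timestamp, request.remote_addr, request.method,
                request.full_path, response.status)
    return response


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=4000)
```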