Abdullah Mohammed abodacs

@chriscarrollsmith
chriscarrollsmith / llm-hackathon-submissions.md
Last active July 9, 2025 11:42
Writeup of submissions to the Coders' Colaboratory `llm` hackathon in Latham, New York

Projects

Runner-Up: Doctor of Credit

Prerequisites

Google Chrome CLI entrypoint

@aessam
aessam / claude-code-dependency-graph-prompt.md
Created June 22, 2025 21:45
Universal prompt for Claude Code to generate interactive multi-level dependency graphs for any codebase. Creates beautiful D3.js visualizations with 3 zoom levels: system overview (40k ft), module view (10k ft), and full dependency graph. Works with Rust, JavaScript, Python, Go, Java, and more.

Analyze this codebase and create a multi-level interactive dependency graph visualization as a single HTML file.

Level 1 - System Overview (40,000ft view):

  • Show only top-level directories/subsystems
  • Group by: Frontend, Backend, Core, Database, Tests, Tools, External
  • Display as large nodes with # of files and primary language
  • Show only major connections between subsystems

Level 2 - Module View (10,000ft view):

  • Show ~30-50 most important modules/packages
@ThanosPapathanasiou
ThanosPapathanasiou / todo.cs
Last active June 27, 2025 20:16
todo app with dotnet10, htmx and bulma
#:sdk Microsoft.NET.Sdk.Web
using System.IO;
using System.Text;
using System.Text.Json;
using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.Filters;
@corbt
corbt / 1_results.txt
Last active July 7, 2025 17:46
Benchmark script for reward model performance
Strategy                | Relative Throughput | Time (s) | Cost ($/M tokens)
------------------------|---------------------|----------|-------------------
Unsloth                 |                2.17 |     3.83 |           $0.0188
Unsloth+PEFT            |                1.58 |     5.27 |           $0.0259
Transformers+Liger      |                1.14 |     7.28 |           $0.0358
vLLM                    |                1.00 |     8.31 |           $0.0409
Transformers            |                0.97 |     8.54 |           $0.0420
Transformers+Liger+PEFT |                0.84 |     9.85 |           $0.0484
Transformers+PEFT       |                0.74 |    11.26 |           $0.0554
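
The derived columns are simple arithmetic over the measured times: relative throughput is the vLLM baseline time divided by each strategy's time, and cost per million tokens scales linearly with wall-clock time at a fixed GPU price and benchmark size. A minimal sketch of that arithmetic follows; GPU_COST_PER_HOUR and TOKENS_PER_RUN are hypothetical placeholders chosen only to land near the table's values, not settings taken from the gist.

# Recompute the table's derived columns from the measured times.
# GPU_COST_PER_HOUR and TOKENS_PER_RUN are assumed placeholders, not values from the gist.
GPU_COST_PER_HOUR = 1.77   # hypothetical $/hour for the benchmark GPU
TOKENS_PER_RUN = 100_000   # hypothetical number of tokens scored per timed run

times = {                  # wall-clock seconds per run, copied from the table above
    "Unsloth": 3.83,
    "Unsloth+PEFT": 5.27,
    "Transformers+Liger": 7.28,
    "vLLM": 8.31,
    "Transformers": 8.54,
    "Transformers+Liger+PEFT": 9.85,
    "Transformers+PEFT": 11.26,
}

baseline = times["vLLM"]   # throughput is normalized to the vLLM run
for strategy, seconds in times.items():
    rel_throughput = baseline / seconds
    cost_per_m_tokens = (seconds / 3600) * GPU_COST_PER_HOUR / (TOKENS_PER_RUN / 1e6)
    print(f"{strategy:<24} | {rel_throughput:.2f} | {seconds:.2f} | ${cost_per_m_tokens:.4f}")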

Question: Should I avoid using RAG for my AI application after reading that "RAG is dead" for coding agents?

Many developers are confused about when and how to use RAG after reading articles claiming "RAG is dead." Understanding what RAG actually means versus the narrow marketing definitions will help you make better architectural decisions for your AI applications.

Answer: The viral article claiming RAG is dead specifically argues against using naive vector database retrieval for autonomous coding agents, not RAG as a whole. This is a crucial distinction that many developers miss due to misleading marketing.

RAG simply means Retrieval-Augmented Generation - using retrieval to provide relevant context that improves your model's output. The core principle remains essential: your LLM needs the right context to generate accurate answers. The question isn't whether to use retrieval, but how to retrieve effectively.

For coding
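
To make the distinction concrete, here is a minimal sketch of the retrieve-augment-generate loop the answer describes, deliberately using plain keyword overlap rather than a vector database; score(), retrieve(), and the call_llm() stub are illustrative stand-ins, not any particular library's API.

# Minimal RAG loop: retrieve relevant context, build a prompt, generate.
# The retriever is deliberately simple (keyword overlap, no vector DB).

def score(query: str, doc: str) -> int:
    """Toy relevance score: number of words shared with the query."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most relevant to the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Augment the query with the retrieved context."""
    return "Answer using only this context:\n\n" + "\n\n".join(context) + f"\n\nQuestion: {query}"

def call_llm(prompt: str) -> str:
    """Placeholder for whatever model call your application makes."""
    return "<model output>"

docs = [
    "Orders are retried three times before being marked failed.",
    "The billing service exposes a /refund endpoint.",
    "Deployments run nightly at 02:00 UTC.",
]
query = "How many times are orders retried?"
print(call_llm(build_prompt(query, retrieve(query, docs))))

The retrieval step here could just as easily be grep, a SQL query, or an agent reading files directly: RAG is the pattern of supplying retrieved context, not one particular storage backend.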

@stevebauman
stevebauman / vite.config.js
Last active May 8, 2025 01:08
Vite Server CORS: Allow Any Subdomain
import { defineConfig, loadEnv } from 'vite';
// ...
export default defineConfig(({ mode }) => {
const env = loadEnv(mode, process.cwd());
const { protocol, hostname } = new URL(env.VITE_URL);
const root = hostname.split('.').slice(-2).join('\\.');
@perfectbase
perfectbase / await.tsx
Last active July 1, 2025 13:42
Await component for tRPC with prefetch
/* eslint-disable @typescript-eslint/no-explicit-any */
import { type TRPCQueryOptions } from '@trpc/tanstack-react-query';
import { unstable_noStore } from 'next/cache';
import { Fragment, Suspense, type ReactNode } from 'react';
import { ErrorBoundary } from 'react-error-boundary';
import { HydrateClient, prefetch as prefetchTRPC } from '@/trpc/server';
type AwaitProps<T> =
| {
promise: Promise<T>;
@joseph-crowley
joseph-crowley / shadcn.md
Created March 17, 2025 19:58
initial solution for shadcn llms.txt

Below is a compressed yet complete reference for quickly integrating each shadcn component. Assumption: you already have the files from your question in @/components/ui/*.tsx and can import them directly. All components accept typical React props plus any Radix/3rd-party props. Adjust styling and props as needed. Do not rewrite any of the code for the shadcn components.


1. Accordion

Import

import {
  Accordion,
  AccordionItem,
@kalomaze
kalomaze / gist:37c70e022cb1e9428ebb1ee7a4b52275
Last active April 5, 2025 10:57
GRPO Reinforcement Learning - 7b GSM8k on 8xH100 / 8xA100
# the "verifiers" repository is a clean implementation of templated GRPO reinforcement learning training environments
# this is a generic set of "install from scratch" commands complete with a deepspeed z3 config that i have been using when i spin up nodes
# it will run on the gsm8k example w/ default batch size & generation size (8), and the 8th GPU is used for vllm generations
# qwen 14b full finetuning will run on this configuration too without LoRA or CUDA OOM, at least for the gsm8k task's context sizes + generation lengths
# hyperparameters are controlled by `verifiers/utils/config_utils.py`; i have been preferring extreme grad clipping (between 0.001 and 0.01) and low beta (under 0.01)
# NOTE FEB 27: examples have moved into `verifiers/examples` not `/examples`
cd /root
mkdir boom
@sayakpaul
sayakpaul / grade_images_with_gemini.py
Last active July 7, 2025 00:14
Shows how to use Gemini Flash 2.0 to grade images on multiple aspects like accuracy to prompt, emotional and thematic response, etc.
from google import genai
from google.genai import types
import typing_extensions as typing
from PIL import Image
import requests
import io
import json
import os
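
The preview stops at the imports; below is a hedged sketch of how the grading call might continue with the google-genai client. The ImageGrade schema, aspect names, image URL, and prompt text are illustrative assumptions, not the gist's actual code.

# Sketch only: the schema, prompt, and URL below are assumptions, not the gist's code.
class ImageGrade(typing.TypedDict):
    prompt_accuracy: int       # 1-5: how closely the image follows its prompt
    emotional_thematic: int    # 1-5: emotional and thematic response
    comments: str

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

# Fetch an image and hand it to Gemini along with the grading instructions.
raw = requests.get("https://example.com/sample.png", timeout=30).content
image = Image.open(io.BytesIO(raw))

response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents=[image, "Grade this image for accuracy to its prompt and for emotional/thematic response."],
    config=types.GenerateContentConfig(
        response_mime_type="application/json",
        response_schema=ImageGrade,
    ),
)
print(json.loads(response.text))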