@dimchansky
Dmitrij Koniajev

Run go install, then:

  • gogitlocalstats -add /path/to/folder scans that folder and its subdirectories for Git repositories to track
  • gogitlocalstats -email your@email.com generates a CLI stats graph representing the last 6 months of activity for the given email. The default email can be configured in main.go, so gogitlocalstats can also be run without parameters.

Being able to pass an email as a parameter also makes it possible to scan repositories for collaborators' activity.

License: CC BY-SA 4.0

@matsui528
matsui528 / l2sqr_functions.cpp
Last active October 23, 2023 05:17
Runtime evaluation for squared Euclidean distances with SSE, AVX, AVX512 implementations
#include <iostream>
#include <random>
#include <chrono>
#include <x86intrin.h>
#include <cassert>
// Runtime evaluation for squared Euclidean distance functions
// - fvec_L2_sqr_ref: naive reference impl from Faiss
// - fvec_L2_sqr_sse: SSE impl from Faiss
// - fvec_L2_sqr_avx: AVX impl from Faiss
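For reference, the scalar quantity every one of these variants computes is just the sum of squared differences; the SIMD versions process 4 (SSE), 8 (AVX), or 16 (AVX-512) floats per step. A minimal sketch (the function name l2Sqr is mine, not Faiss's):

```go
package main

import "fmt"

// l2Sqr returns the squared Euclidean distance between two
// equal-length vectors: the sum over i of (x[i]-y[i])^2.
func l2Sqr(x, y []float32) float32 {
	var sum float32
	for i := range x {
		d := x[i] - y[i]
		sum += d * d
	}
	return sum
}

func main() {
	// (1-4)^2 + (2-6)^2 = 9 + 16 = 25
	fmt.Println(l2Sqr([]float32{1, 2}, []float32{4, 6}))
}
```

Skipping the final square root is what makes this cheaper than true Euclidean distance while preserving nearest-neighbor ordering, which is why Faiss benchmarks the squared form.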
@ericjster
ericjster / writebinary_test.go
Created May 16, 2019 07:21
Golang write binary file using memory mapped structs
package main
// Example of writing a binary file of float structs.
import (
	"log"
	"os"
	"reflect"
	"testing"
	"unsafe"
)
@anteo
anteo / FAR2L.itermkeymap
Last active November 10, 2021 22:11
iTerm2 key mappings for FAR2L
{"Touch Bar Items":[],"Key Mappings":{"0x35-0x40000-0x17":{"Label":"","Action":10,"Text":"[55;7~"},"0xf72d-0x80000-0x79":{"Label":"","Text":"[6;3~","Action":10},"0x28-0x60000-0x19":{"Label":"","Action":10,"Text":"[59;6~"},"0xf708-0x40000-0x60":{"Label":"","Text":"[15;5~","Action":10},"0xf706-0x20000-0x63":{"Label":"","Text":"O2R","Action":10},"0x23-0x60000-0x14":{"Label":"","Action":10,"Text":"[53;6~"},"0xf70d-0x40000-0x6d":{"Label":"","Text":"[21;5~","Action":10},"0xf708-0x20000-0x60":{"Label":"","Text":"[15;2~","Action":10},"0x38-0x80000-0x1c":{"Text":"8","Label":"","Action":10},"0xf706-0x40000-0x63":{"Label":"","Text":"O5R","Action":10},"0x34-0x80000-0x15":{"Text":"4","Label":"","Action":10},"0xf703-0x240000-0x7c":{"Label":"","Text":"[1;5C","Action":10},"0xf72d-0x40000-0x79":{"Label":"","Text":"[6;5~","Action":10},"0xf70d-0x20000-0x6d":{"Label":"","Text":"[21;2~","Action":10},"0xf702-0x240000-0x7b":{"Label":"","Text":"[1;5D","Action":10},"0xf700-0x220000-0x7e":{"Label":"","Text":"[2A","Action":10},"0xd-0x4
@Corsario-CL
Corsario-CL / autoexec.cfg
Created April 3, 2021 01:08
Quake3 – Custom high definition configurations for best visual quality (Quake 3, Quake III Arena).
seta r_mode "-1"
seta r_customwidth "5120"
seta r_customheight "2880"
seta cg_fov "115"
seta cg_gunCorrectFOV "1"
seta cl_renderer "opengl2"
seta r_allowSoftwareGL "0"
seta r_ignoreGLErrors "1"
seta r_smp "1"
seta r_displayrefresh "0"
@danirukun
danirukun / whisper-transcribe.bash
Last active September 2, 2025 08:37
Transcribe (and translate) any VOD (e.g. from YouTube) using Whisper from OpenAI and embed subtitles!
#!/usr/bin/env bash
# Small shell script to more easily automatically download and transcribe live stream VODs.
# This uses YT-DLP, ffmpeg and the CPP version of Whisper: https://github.com/ggerganov/whisper.cpp
# Use `./transcribe-vod help` to print help info.
# MIT License
# Copyright (c) 2022 Daniils Petrovs
@harryaskham
harryaskham / server.hs
Created March 16, 2023 14:50
GPT-4 Written ChatGPT WebApp in Haskell
{- cabal:
build-depends: base
, scotty
, aeson
, http-client-tls
, http-client
, bytestring
, text
, http-types
-}

LLM Wiki

A pattern for building personal knowledge bases using LLMs.

This is an idea file: it is designed to be copy-pasted into your own LLM agent (e.g. OpenAI Codex, Claude Code, OpenCode / Pi, etc.). Its goal is to communicate the high-level idea; your agent will build out the specifics in collaboration with you.

The core idea

Most people's experience with LLMs and documents looks like RAG: you upload a collection of files, the LLM retrieves relevant chunks at query time, and generates an answer. This works, but the LLM is rediscovering knowledge from scratch on every question. There's no accumulation. Ask a subtle question that requires synthesizing five documents, and the LLM has to find and piece together the relevant fragments every time. Nothing is built up. NotebookLM, ChatGPT file uploads, and most RAG systems work this way.
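The query-time retrieval step described above can be sketched minimally: embed the question, score every chunk against it, keep the top k, and hand those to the model. A toy sketch with dummy 2-d vectors standing in for a real embedding model (chunk, topK, and the sample texts are all illustrative):

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// cosine returns the cosine similarity of two equal-length vectors.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

type chunk struct {
	Text string
	Vec  []float64 // embedding, produced elsewhere
}

// topK returns the k chunks most similar to the query vector --
// the per-question retrieval that RAG repeats from scratch every time.
func topK(query []float64, chunks []chunk, k int) []chunk {
	sort.Slice(chunks, func(i, j int) bool {
		return cosine(query, chunks[i].Vec) > cosine(query, chunks[j].Vec)
	})
	if k > len(chunks) {
		k = len(chunks)
	}
	return chunks[:k]
}

func main() {
	chunks := []chunk{
		{"notes on billing", []float64{0.9, 0.1}},
		{"notes on auth", []float64{0.1, 0.9}},
	}
	for _, c := range topK([]float64{1, 0}, chunks, 1) {
		fmt.Println(c.Text)
	}
}
```

Nothing in this loop persists between questions: the ranking is recomputed each time, which is exactly the "no accumulation" property the wiki pattern is meant to fix.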