stelar7 / run_svg_tests.py
Last active November 16, 2024 14:17
Requires a running instance of WebDriver on port 4444 (and ImageMagick)
#!/usr/bin/env python3
from collections import namedtuple
from pathlib import Path
import base64
import httplib2
import json
import os
import subprocess
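
Only the imports survive in this excerpt. As a hedged sketch of what the description implies — POSTing to a local WebDriver session on port 4444, pulling a screenshot, and diffing it with ImageMagick's compare — the block below is my own illustration: the endpoints are the standard W3C WebDriver ones, but the file names and capabilities are placeholder assumptions, not the gist's actual code.

# Illustrative sketch only: drive a local WebDriver over the W3C wire protocol
# with httplib2 and diff the screenshot against a reference via ImageMagick.
import base64
import json
import subprocess

import httplib2

WEBDRIVER = "http://localhost:4444"
http = httplib2.Http()

def post(path, payload):
    _, content = http.request(WEBDRIVER + path, "POST",
                              body=json.dumps(payload),
                              headers={"Content-Type": "application/json"})
    return json.loads(content)["value"]

# Start a session and navigate to the SVG under test (the path is an assumption).
session = post("/session", {"capabilities": {}})["sessionId"]
post(f"/session/{session}/url", {"url": "file:///tmp/test.svg"})

# Grab a screenshot (WebDriver returns it base64-encoded) and save it.
_, content = http.request(f"{WEBDRIVER}/session/{session}/screenshot", "GET")
with open("actual.png", "wb") as f:
    f.write(base64.b64decode(json.loads(content)["value"]))

# Compare against a reference image with ImageMagick's `compare`; the AE metric
# (printed to stderr) is the number of pixels that differ.
result = subprocess.run(["compare", "-metric", "AE",
                         "actual.png", "expected.png", "diff.png"],
                        capture_output=True, text=True)
print("pixels differing:", result.stderr.strip())

# Clean up the WebDriver session.
http.request(f"{WEBDRIVER}/session/{session}", "DELETE")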
adtac / README.md
Last active October 27, 2024 08:53
Using your Kindle as an e-ink monitor

3.5 fps, Paperwhite 3
@adtac_

step 1: jailbreak your Kindle

mobileread.com is your best resource here; follow the instructions from the LanguageBreak thread

I didn't follow the LanguageBreak instructions exactly, since I didn't care about most of the features and was curious to do it myself, but the LanguageBreak GitHub repo was invaluable for debugging

https://github.com/circl-lastname/LBSync
aras-p / metal_shader_compiler_cache_location.md
Last active October 29, 2024 00:46
Apple Metal Shader Compiler Cache location

As per gfx-rs/gfx#3716 (comment):

macOS has a system shader cache at $(getconf DARWIN_USER_CACHE_DIR)/com.apple.metal

On my MacBook Pro, that resolves to /var/folders/52/l9z1nqld5yg99tb_s3q6nyhh0000gn/C:

  • System shaders: /var/folders/52/l9z1nqld5yg99tb_s3q6nyhh0000gn/C/com.apple.metal
  • Blender shaders: /var/folders/52/l9z1nqld5yg99tb_s3q6nyhh0000gn/C/org.blenderfoundation

Delete all the folders in there to clear the cache.
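
A minimal sketch of that clean-up in Python, resolving the cache root with getconf exactly as above and removing every folder under com.apple.metal (the script itself is mine, not part of the gist):

# Sketch: locate the Metal shader cache via getconf and clear its contents.
import shutil
import subprocess
from pathlib import Path

cache_root = Path(subprocess.run(["getconf", "DARWIN_USER_CACHE_DIR"],
                                 capture_output=True, text=True,
                                 check=True).stdout.strip())
metal_cache = cache_root / "com.apple.metal"

# Delete every folder inside the cache, keeping the top-level directory itself.
if metal_cache.is_dir():
    for entry in metal_cache.iterdir():
        if entry.is_dir():
            shutil.rmtree(entry)
    print(f"cleared {metal_cache}")
else:
    print(f"no cache at {metal_cache}")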

saagarjha / file_drain.c
Created November 11, 2023 10:01
"Drain" files while they are processed to reduce free disk space requirements
// Sometimes you have a large file on a small disk and would like to "transform"
// it in some way: for example, by decompressing it. However, you might not have
// enough space on disk to keep both the compressed file and the
// decompressed results. If the process can be done in a streaming fashion, it
// would be nice if the file could be "drained"; that is, the file would be
// sequentially deleted as it is consumed. At the start you'd have 100% of the
// original file, somewhere in the middle you'd have about half of the original
// file and half of your output, and by the end the original file will be gone
// and you'll be left with just the results. If you do it this way, you might
// be able to do the entire operation without extra space!
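
The gist's implementation isn't shown in this listing. Purely as an illustration of the idea, here is a rough Python sketch for Linux, where fallocate(2) with FALLOC_FL_PUNCH_HOLE releases the already-consumed blocks back to the filesystem while the file is streamed; the chunk size and the libc binding are my assumptions, and macOS would need fcntl(F_PUNCHHOLE) instead.

# Illustrative sketch (Linux): stream a file into a consumer while punching
# holes over the bytes already read, so disk blocks are freed as we go.
import ctypes
import os

FALLOC_FL_KEEP_SIZE = 0x01
FALLOC_FL_PUNCH_HOLE = 0x02
CHUNK = 16 * 1024 * 1024  # 16 MiB per step; an arbitrary choice

libc = ctypes.CDLL("libc.so.6", use_errno=True)
libc.fallocate.argtypes = [ctypes.c_int, ctypes.c_int,
                           ctypes.c_int64, ctypes.c_int64]

def drain(path, consume):
    """Feed `path` to `consume(bytes)` chunk by chunk, freeing blocks behind us."""
    fd = os.open(path, os.O_RDWR)
    offset = 0
    try:
        while True:
            chunk = os.pread(fd, CHUNK, offset)
            if not chunk:
                break
            consume(chunk)
            # Punch a hole over what we just consumed; the file's length stays
            # the same, but the underlying blocks are released.
            if libc.fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                              offset, len(chunk)) != 0:
                raise OSError(ctypes.get_errno(), "fallocate failed")
            offset += len(chunk)
    finally:
        os.close(fd)
    os.unlink(path)  # nothing left but a sparse shell; remove it

# Example use: "drain" a gzip file into a decompressed copy.
# import zlib
# out = open("output", "wb"); d = zlib.decompressobj(16 + zlib.MAX_WBITS)
# drain("input.gz", lambda b: out.write(d.decompress(b)))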
mped-oticon / bash_parallel.source
Created August 21, 2023 11:55
Classic Fork-join parallelism in BASH, blocking and nestable
# Classic fork-join parallelism. bash_parallel calls can be nested arbitrarily.
# Silent by default; set BASH_PARALLEL_VERBOSE=1 for verbose output on stderr.
function bash_parallel
{
    function bash_parallel_echo
    {
        if [[ $BASH_PARALLEL_VERBOSE == 1 ]] ; then
            echo "$@" 1>&2
        fi
    }
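
The Bash implementation is truncated in this excerpt. As a language-neutral illustration of the same blocking, nestable fork-join pattern — not the gist's code — here is a small Python sketch; the function names are mine.

# Illustration only: fork-join in Python. Each parallel() call forks its tasks,
# blocks until all of them finish, and can be nested freely because every call
# owns its own pool.
from concurrent.futures import ThreadPoolExecutor

def parallel(*tasks):
    """Run the given zero-argument callables concurrently and join them all."""
    with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
        futures = [pool.submit(task) for task in tasks]
        return [f.result() for f in futures]  # re-raises any task's exception

# Nested use: the inner parallel() joins before the outer one continues.
def build_docs():
    return parallel(lambda: "html", lambda: "pdf")

print(parallel(build_docs, lambda: "tests passed"))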
veekaybee / normcore-llm.md
Last active November 15, 2024 12:06
Normcore LLM Reads

Anti-hype LLM reading list

Goals: Add links that are reasonable and good explanations of how stuff works. No hype and no vendor content if possible. Practical first-hand accounts of models in prod eagerly sought.

Foundational Concepts


Pre-Transformer Models

adrienbrault / llama2-mac-gpu.sh
Last active August 15, 2024 07:10
Run Llama-2-13B-chat locally on your M1/M2 Mac with GPU inference. Uses 10GB RAM. UPDATE: see https://twitter.com/simonw/status/1691495807319674880?s=20
# Clone llama.cpp
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
# Build it
make clean
LLAMA_METAL=1 make
# Download model
export MODEL=llama-2-13b-chat.ggmlv3.q4_0.bin
AndrewRadev / matchfuzzy.vim
Last active May 18, 2024 15:18
A fuzzy-finder in 40 lines of portable Vimscript
" Pick a different highlighting group by adding this in your vimrc:
"
" highlight link fuzzyMatch <group>
"
" The "default" only makes the link if it's not already set.
"
highlight default link fuzzyMatch Search
" The components of the command definition:
"
Theldus / README.md
Last active October 6, 2024 22:54
Helping your 'old' PC build faster with your mobile device (no root required)

Helping your 'old' PC build faster with your mobile device

It all happened when I decided to run Geekbench 5 on my phone: surprisingly, the single-core performance matched my 'old'¹ Pentium T3200 and surpassed it in multi-core. Since I've been having fun with distcc for the last few days, I asked myself: 'Can my phone really help my old laptop build faster? Nah, hard to believe... but let's try.'

Without further ado: YES. Not only can my phone keep up, it significantly helps in the build process; I believe the results below speak for themselves:

asciicast: Building Git (#30cc8d0) on a Pentium T3200 alone, 8m30s

asciicast: Building Git (#30cc8d0) on a Pentium T3200 (2x 2.0 GHz) + Snapdragon 636 (4x 1.8 + 4x 1.6 GHz), 2m9s