idleberg / vscode-macos-context-menu.md
Last active August 9, 2025 21:46
“Open in Visual Studio Code” in macOS context-menu

Open in Visual Studio Code

  • Open Automator
  • Create a new document
  • Select Quick Action
  • Set “Service receives selected” to files or folders in any application
  • Add a Run Shell Script action (a sketch of the script body follows this list)
    • your default shell should already be selected; otherwise use /bin/zsh for macOS 10.15 (“Catalina”) or later
    • older versions of macOS use /bin/bash
    • if you're using something else, you probably know what to do 😉
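
With the Run Shell Script action's “Pass input” popup set to “as arguments”, the selected files and folders arrive as positional parameters. A minimal sketch of the script body, assuming the standard VS Code application name (the exact command in your workflow may differ):

    open -a "Visual Studio Code" "$@"

Save the Quick Action under a name such as “Open in Visual Studio Code” and it will show up in the Finder context menu.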
ttesmer / AD.hs
Last active October 29, 2024 15:35
Automatic Differentiation in 38 lines of Haskell using Operator Overloading and Dual Numbers. Inspired by conal.net/papers/beautiful-differentiation
{-# LANGUAGE TypeSynonymInstances #-}

-- A dual number: a value paired with its derivative (tangent) part.
data Dual d = D Float d deriving Show

type Float' = Float

-- Differentiate f at x by seeding the tangent with 1 and reading it back out.
diff :: (Dual Float' -> Dual Float') -> Float -> Float'
diff f x = y'
  where D y y' = f (D x 1)

class VectorSpace v where
  zero :: v
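
The listing above is truncated here; the core trick is operator overloading, so that ordinary arithmetic on Dual values carries derivatives along. As a rough illustration (a hypothetical sketch, not the gist's actual instances, and needing FlexibleInstances because of the Float' synonym in the instance head), a Num instance for Dual Float' could encode the sum and product rules:

    -- Hypothetical sketch: propagate derivatives through (+) and (*).
    instance Num (Dual Float') where
      D x x' + D y y' = D (x + y) (x' + y')
      D x x' * D y y' = D (x * y) (x' * y + x * y')
      negate (D x x') = D (negate x) (negate x')
      abs    (D x x') = D (abs x) (x' * signum x)
      signum (D x x') = D (signum x) 0
      fromInteger n   = D (fromInteger n) 0

With an instance along those lines, diff (\x -> x * x + 3) 2 evaluates to 4.0.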
rain-1 / llama-home.md
Last active June 24, 2025 11:12
How to run Llama 13B with a 6GB graphics card

This worked on 14/May/23. The instructions will probably require updating in the future.

LLaMA is a text-prediction model similar to GPT-2 and to the version of GPT-3 that has not been fine-tuned yet. It is also possible to run fine-tuned versions (like Alpaca or Vicuna, I think) with this; those versions are more focused on answering questions.

Note: I have been told that this does not support multiple GPUs. It can only use a single GPU.

It is now possible to run LLaMA 13B with a 6GB graphics card (e.g. an RTX 2060), thanks to the amazing work that has gone into llama.cpp. The latest change is CUDA/cuBLAS support, which lets you pick an arbitrary number of the transformer layers to run on the GPU. This is perfect for low VRAM.
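
As a rough example (a sketch only: the model path is illustrative, and the exact flag spelling may differ between llama.cpp versions), offloading part of the 13B model's 40 transformer layers would look something like:

    ./main -m models/llama-13b/ggml-model-q4_0.bin --n-gpu-layers 18 -p "Hello"

More layers on the GPU means faster generation but more VRAM used, so you tune the number to whatever fits in 6GB.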

  • Clone llama.cpp from git; I am on commit 08737ef720f0510c7ec2aa84d7f70c691073c35d.
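
A sketch of that step (assuming the upstream repository URL):

    git clone https://github.com/ggerganov/llama.cpp
    cd llama.cpp
    git checkout 08737ef720f0510c7ec2aa84d7f70c691073c35d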