Thushan Fernando (thushan)

@jwbee
jwbee / jq.md
Last active April 27, 2025 11:31
Make Ubuntu packages 90% faster by rebuilding them

TL;DR

You can take the same source code package that Ubuntu uses to build jq, compile it again, and realize 90% better performance.

Setting

I use jq for processing GeoJSON files and other open data offered in JSON format. Today I am working with a 500MB GeoJSON file that contains the Alameda County Assessor's parcel map. I want to run a query that prints the city for every parcel worth more than a threshold amount. The program is
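
The gist preview cuts off before the query itself. As a rough illustration of the described filter in Python (the file name and the property keys "AssessedValue" and "City" are placeholders, not the gist's actual schema):

import json

THRESHOLD = 1_000_000  # example dollar threshold

# Load the whole FeatureCollection into memory (fine for a sketch; a streaming
# parser would be kinder to a 500MB file).
with open("parcels.geojson") as f:
    data = json.load(f)

for feature in data["features"]:
    props = feature["properties"]
    if props.get("AssessedValue", 0) > THRESHOLD:  # placeholder key name
        print(props.get("City"))                   # placeholder key name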

# Truncated preview of a separate Python script: it imports Hugging Face
# transformers and PEFT and begins an argument parser that takes a base model
# path, presumably to load a base causal LM and apply a PEFT adapter to it.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch
import os
import argparse


def get_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("--base_model_name_or_path", type=str)
@rain-1
rain-1 / llama-home.md
Last active April 24, 2025 06:41
How to run Llama 13B with a 6GB graphics card

This worked on 14/May/23. The instructions will probably require updating in the future.

LLaMA is a text-prediction model similar to GPT-2, or to the version of GPT-3 that has not been fine-tuned yet. It is also possible to run fine-tuned versions (such as Alpaca or Vicuna) with this, I think; those versions are more focused on answering questions.

Note: I have been told that this does not support multiple GPUs. It can only use a single GPU.

It is now possible to run LLaMA 13B with a 6GB graphics card (e.g. an RTX 2060), thanks to the amazing work that has gone into llama.cpp. The latest change is CUDA/cuBLAS support, which lets you pick an arbitrary number of transformer layers to run on the GPU. This is perfect for low VRAM.
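
To give a feel for the layer-offload knob, here is a minimal sketch using the llama-cpp-python bindings rather than the llama.cpp CLI itself (the model path, layer count and prompt are placeholders):

from llama_cpp import Llama  # pip install llama-cpp-python (built with cuBLAS enabled)

# n_gpu_layers controls how many transformer layers are offloaded to the GPU;
# pick a value that fits your VRAM (20 here is just an example).
llm = Llama(model_path="./models/llama-13b-q4.bin", n_gpu_layers=20)

out = llm("The capital of France is", max_tokens=16)
print(out["choices"][0]["text"])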

  • Clone llama.cpp from git; I am on commit 08737ef720f0510c7ec2aa84d7f70c691073c35d.
@rain-1
rain-1 / LLM.md
Last active April 8, 2025 13:49
LLM Introduction: Learn Language Models

Purpose

Bootstrap knowledge of LLMs as quickly as possible, with a bias/focus towards GPT.

Avoid being a link dump; try to provide only valuable, well-tuned information.

Prelude

Neural-network links to read before starting with transformers.

Resurrecting NTC C.H.I.P. computers

Introduction

I bought four of the Next Thing Co. (defunct since 2018) C.H.I.P. (CHIP hereafter) computers shortly after their successful Kickstarter campaign of 2015. The CHIP computer is an interesting beast. It was positioned as a competitor to the original Raspberry Pi and cost only US$8.00 before shipping; the Raspberry Pi cost US$25 at the time. The CHIP had some features that the Pi did not have at the time, including built-in Wi-Fi and battery charging. It was also considerably smaller than the original Pi.

The CHIP was shipped with Debian 8 (Jessie) and was a capable Linux computer. I had big plans for all of them, but the

@1oh1
1oh1 / optiplex-3060-enable-pcie3.md
Last active April 16, 2025 01:31
Dell OptiPlex 3060 - Enable NVMe Gen 3 speeds (Enable PCIe 3.0)

Enable PCIe 3.0 speeds for NVMe SSDs on Dell OptiPlex 3060

Out of the box, any M.2 NVMe SSD connected to the Dell OptiPlex 3060 runs at PCIe Gen 2.0 speeds (max 5 GT/s; ~2 GB/s), so the speed tests look like this:

[screenshot screen1: speed test at PCIe Gen 2.0]

However, after this BIOS mod, the SSD can reach PCIe Gen 3.0 speeds (max 8 GT/s; ~3.9 GB/s), so the speed tests look like this:

[screenshot screen2: speed test at PCIe Gen 3.0]
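
The quoted ceilings match the usual PCIe link arithmetic, assuming the typical x4 M.2 link (the lane count is not stated in the gist): PCIe 2.0 runs 5 GT/s per lane with 8b/10b encoding, PCIe 3.0 runs 8 GT/s per lane with 128b/130b encoding. A quick sanity check:

# Rough per-direction bandwidth for an x4 NVMe link (lane count assumed)
lanes = 4

gen2 = 5e9 * (8 / 10) / 8 * lanes     # 5 GT/s per lane, 8b/10b encoding   -> bytes/s
gen3 = 8e9 * (128 / 130) / 8 * lanes  # 8 GT/s per lane, 128b/130b encoding -> bytes/s

print(f"PCIe 2.0 x4 ~ {gen2 / 1e9:.1f} GB/s")  # ~2.0 GB/s
print(f"PCIe 3.0 x4 ~ {gen3 / 1e9:.1f} GB/s")  # ~3.9 GB/s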

@X547
X547 / 0001-Haiku-fix-build-and-minimal-operation.patch
Last active January 1, 2022 07:50
Initial Wine patch for Haiku (wine-6.23). Clang is required for the build.
From a94d195b2ef1188052d0b950bc889b5eb0dfc3dc Mon Sep 17 00:00:00 2001
From: X512 <[email protected]>
Date: Fri, 31 Dec 2021 08:17:23 +0900
Subject: Haiku: fix build and minimal operation
---
 dlls/nsiproxy.sys/ndis.c | 10 ++++++++
 dlls/ntdll/unix/file.c   |  2 ++
 dlls/ntdll/unix/loader.c |  4 +++
 dlls/ntdll/unix/serial.c |  2 ++
@Paraphraser
Paraphraser / Docker+OctoPrint - When your 3D printer turns on and off.md
Last active March 24, 2025 10:53
IOTstack+OctoPrint: When your 3D printer turns on and off

octoprint-docker: when your 3D printer turns on and off

Task goals

  • Keep the OctoPrint Docker container service running even when your printer is switched off:

    • GCODE files can still be uploaded
    • Plugins can still be updated
@thushan
thushan / windows-terminal-settings.json
Last active February 27, 2022 14:02
My Windows Terminal Settings profile with elevated prompt for PowerShell via gsudo.
// latest version is now here:
// https://github.com/thushan/dotfiles/blob/main/windows/windows-terminal/settings.json
// TinyGo flight stick using an analog joystick and 5 buttons
// Outputs data via the serial port in a very simple space-delimited format
// Each line of data ends with "CR-LF" (0x0D 0x0A, i.e. decimal 13 and 10)
// An update is sent every 50 ms.
package main

import (
	"image/color"
	"machine"
	"strconv"