brian piercy (bjpcjp)
bjpcjp / shell-best-practices.sh (last active July 31, 2024): shell script best practices template
#!/usr/bin/env bash
# source: https://sharats.me/posts/shell-script-best-practices/
set -o errexit    # abort on the first failing command
set -o nounset    # referencing an unset variable is an error
set -o pipefail   # a pipeline fails if any stage fails
if [[ "${TRACE-0}" == "1" ]]; then
    set -o xtrace # opt-in command tracing: run with TRACE=1
fi
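A minimal sketch of the template in use. The file name `demo.sh` and the `main` function are illustrative additions, not part of the original gist:

```shell
#!/usr/bin/env bash
# Hypothetical demo.sh built on the strict-mode preamble above.
set -o errexit    # abort on the first failing command
set -o nounset    # referencing an unset variable is an error
set -o pipefail   # a pipeline fails if any stage fails
if [[ "${TRACE-0}" == "1" ]]; then
    set -o xtrace # opt-in tracing: run with TRACE=1
fi

main() {
    echo "hello from demo"
}
main "$@"
```

Running `./demo.sh` prints the greeting; running `TRACE=1 ./demo.sh` also echoes each command to stderr as it executes.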
bjpcjp / d3js-bullet.html (created February 10, 2024): D3.js bullet chart
<!DOCTYPE html>
<meta charset="utf-8">
<style>
body {
font-family: "Helvetica Neue", Helvetica, Arial, sans-serif;
margin: auto;
padding-top: 40px;
position: relative;
width: 800px;
}
</style>
bjpcjp / 10_oneliners.py (created January 19, 2025): 10 Python one-liners (KDnuggets)
# https://www.kdnuggets.com/10-python-one-liners-change-coding-game
# 1. Lambda Functions
price_after_discount = lambda price: price*0.9
# 2. Map Operations on Lists
discounted_prices = list(map(price_after_discount, prices))
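The snippet above leaves `prices` undefined; putting the two one-liners together with a sample list (the values are made up for illustration):

```python
# Sample input; the original snippet does not define `prices`.
prices = [100.0, 40.0, 250.0]

# 1. Lambda: apply a 10% discount to a single price.
price_after_discount = lambda price: price * 0.9

# 2. map(): apply the lambda across the whole list.
discounted_prices = list(map(price_after_discount, prices))
print(discounted_prices)  # [90.0, 36.0, 225.0]
```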

The compgen command is a Bash shell builtin used to generate possible completions for commands, functions, files, and other shell elements. It is commonly used in scripts that implement tab completion or command-suggestion mechanisms.

Syntax:

compgen [option] [word]

Key Features:

- -c lists command names, -b builtins, -a aliases, -k shell keywords
- -f lists file names, -d directory names
- -v lists shell variable names, -u user names
- -W wordlist generates matches from an explicit word list
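A short sketch of compgen in action; the word list in the last call is made up for illustration:

```shell
#!/usr/bin/env bash
# List builtins whose names start with "co" (compgen, complete, ...).
compgen -b co

# List commands on PATH whose names start with "ls".
compgen -c ls

# Generate matches from an explicit word list (-W): prints only the
# words in the list that begin with the given prefix "st".
compgen -W "start stop restart status" st
```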

Group   | Benchmark       | Summary                                                                                                  | Explanation Link
English | MMLU (EM)       | Measures multi-task learning across diverse knowledge domains to evaluate general academic proficiency.  | MMLU Benchmark
English | MMLU-Redux (EM) | A reduced version of MMLU focusing on key topics or subsets of academic questions.                       | No dedicated link available.
Having tried a few of the Qwen 3 models now, my favorite is a bit of a surprise to me: I'm really enjoying Qwen3-8B.
I've been running prompts through the MLX 4bit quantized version, mlx-community/Qwen3-8B-4bit. I'm using llm-mlx like this:
llm install llm-mlx
llm mlx download-model mlx-community/Qwen3-8B-4bit
This pulls 4.3GB of data and saves it to ~/.cache/huggingface/hub/models--mlx-community--Qwen3-8B-4bit.