John Wiegley jwiegley

*Prompt*: Hello
*Response*:
#+begin_reasoning
Thinking Process:
1. **Analyze the Request:**
   * Input: "Hello" (in a new chat context).
   * Constraint: "Respond in compressed, concise, semantics-only format. Prioritize brevity, key points only, minimal elaboration, no filler language. Maintain clarity, omit redundancy, and deliver essential details with maximum efficiency."
   * Goal: Acknowledge the greeting efficiently while adhering to the strict style guidelines.
#!/usr/bin/env bash
#
# update-bitfiles — Update FPGA bitfiles on all BittWare cards via pho
#
# Usage:
#   update-bitfiles <path-to-rbf-file>
#   update-bitfiles /ice/svt/releases/archer/archer_agm_01.05.06.00_02.24.26/archer_agm_01.05.06.00.rbf
#   update-bitfiles -c 5 <path-to-rbf-file>   # update only card 5
#   update-bitfiles --list-releases           # browse available releases on ICE
#   update-bitfiles --status                  # show current card/image status
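A minimal sketch of how flags like these could be parsed; the variable names, defaults, and dispatch are illustrative assumptions, not the script's actual implementation:

```shell
# Illustrative option parsing for the usage above; not the real script.
card=""          # empty means "all cards"
mode="update"    # default action
rbf=""

while [ $# -gt 0 ]; do
  case "$1" in
    -c) card="$2"; shift 2 ;;                 # update only one card
    --list-releases) mode="list"; shift ;;    # browse releases on ICE
    --status) mode="status"; shift ;;         # show card/image status
    *) rbf="$1"; shift ;;                     # positional: the .rbf path
  esac
done

echo "mode=$mode card=${card:-all} rbf=${rbf:-none}"
```

With no arguments this falls through to the defaults; a real script would validate that `rbf` is set before attempting an update.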
"""Export gpt-oss-20b directly to .fx format using torch.export."""
import torch
from transformers import AutoModelForCausalLM
import os
# Suppress warnings
import warnings
warnings.filterwarnings('ignore')
print("=" * 80)
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE DeriveFoldable #-}
{-# LANGUAGE DeriveFunctor #-}
{-# LANGUAGE DeriveTraversable #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE QuantifiedConstraints #-}
{-# LANGUAGE StandaloneDeriving #-}
(defun org-ext-chain-blockers-in-region (beg end)
  "Chain tasks in region BEG to END with BLOCKER dependencies.
Each task blocked by previous task.  Creates IDs if needed.
Returns count of tasks chained."
  (interactive "r")
  (unless (derived-mode-p 'org-mode)
    (user-error "Not in org buffer"))
  (save-excursion
    (goto-char beg)
    (let ((end-marker (copy-marker end))
rspamd: Add LiteLLM GPT integration
- **modules/services/rspamd.nix**:
  - Define a new SOPS secret `litellm-vulcan-lan` owned by the `rspamd` user,
    ensuring the secret is reloaded when the service restarts.
  - Extend `serviceConfig.LoadCredential` to expose the LiteLLM API key to the
    daemon alongside the existing controller password.
  - Introduce a temporary "info" logging level for GPT debugging (previously
    "warning").
  - Add a new `gpt.conf` override containing full LiteLLM proxy configuration
rspamd.nix: Add LLM-based spam detection via LiteLLM
- Add GPT integration for AI-powered spam classification using the local
  LiteLLM proxy (hera/gpt-oss-120b model at localhost:4000), with autolearn
  enabled to feed GPT results back to the Bayes classifier, a custom
  X-GPT-Spam-Reason header for transparency, a 15s timeout, and a 500-token
  completion limit
- Add SOPS secret configuration for the litellm-vulcan-lan API key with rspamd
  ownership, 0400 permissions, and a service restart trigger

Setting up a new Yubikey

Ensure Yubikey works

Run `gpg --card-status` to make sure the Yubikey is seen.
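To check programmatically that the card is present, the serial number can be pulled out of the status output. The sample text below is illustrative (real `gpg --card-status` output varies by key and firmware); in practice you would capture it with `status_output=$(gpg --card-status)`:

```shell
# Parse the card serial out of (sample) `gpg --card-status` output.
status_output='Reader ...........: Yubico YubiKey OTP FIDO CCID 00 00
Application ID ...: D2760001240100000006123456780000
Serial number ....: 12345678'

# Split each line on ": " and take the value of the "Serial number" field.
serial=$(printf '%s\n' "$status_output" | awk -F': *' '/^Serial number/ { print $2 }')
echo "Yubikey serial: $serial"
```

An empty `serial` here would indicate that no card was detected.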

#!/usr/bin/env bash
# claude-sandbox - Run Claude in a sandboxed firejail environment
# This script runs Claude in firejail with filesystem isolation
# while maintaining access to current directory and Claude configuration
set -euo pipefail
# Capture current environment
CURRENT_DIR="$(pwd)"
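The sandbox invocation itself might then look like the sketch below. The `--whitelist` flags and the `~/.claude` config path are assumptions for illustration (firejail does support `--whitelist=PATH` to keep a directory accessible inside the jail), not the script's actual command line:

```shell
CURRENT_DIR="$(pwd)"
CLAUDE_CONFIG="$HOME/.claude"   # assumed config location, for illustration

# Build the firejail command as an array without executing it.
cmd=(firejail --whitelist="$CURRENT_DIR" --whitelist="$CLAUDE_CONFIG" -- claude)
echo "would run: ${cmd[*]}"
```

Using an array avoids word-splitting problems when the current directory path contains spaces.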
~ ❯ sudo /etc/nixos/scripts/email-tester.py
======================================================================
EMAIL PIPELINE TESTER
======================================================================
User: johnw
Started: 2025-11-07 12:50:29
✓ IMAP password loaded
======================================================================
TEST 1: Normal Email Delivery