Louis Maddox lmmx

@lmmx
lmmx / README.md
Last active April 4, 2026 13:39
Calculate change in rank for PyPI dependencies from 2020 to 2026, and from 2024 to 2026

Uses rankings of direct dependencies for all PyPI packages first released in each year range, pre-computed in the gists (see get_gists.sh to download them)
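The rank-delta idea the README describes can be sketched in a few lines. The rankings below are invented placeholders, not the gist's pre-computed data:

```python
# Given two {package: rank} mappings from different years, report the
# change in rank for packages present in both. Example data only.
ranks_2020 = {"requests": 1, "six": 2, "numpy": 3}
ranks_2026 = {"requests": 2, "numpy": 1, "typing-extensions": 3}

def rank_changes(old, new):
    # Positive delta = the package climbed the ranking (smaller rank number).
    return {pkg: old[pkg] - new[pkg] for pkg in old if pkg in new}

print(rank_changes(ranks_2020, ranks_2026))  # {'requests': -1, 'numpy': 2}
```

Packages absent from either year (here `six` and `typing-extensions`) are skipped, since a delta is only defined for packages ranked in both snapshots.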

@lmmx
lmmx / pypi_top_500_packages_tp.csv
Created March 26, 2026 11:59
Sublist from the top 500 packages on PyPI which use Trusted Publishing with the pypa/gh-action-pypi-publish action
package,github_repo,uses_trusted_publishing,has_pypi_publish_action,has_id_token_write,tp_signals,publishing_workflow,pinning_status,sha_pinned_actions,total_actions,workflow_count,source,error
certifi,certifi/python-certifi,True,True,True,"id-token:write, pypa/gh-action-pypi-publish, pypi-publish-no-password",release.yml,ALL_SHA,5,5,1,cache,
typing-extensions,python/typing_extensions,True,True,True,"id-token:write, pypa/gh-action-pypi-publish, pypi-publish-no-password",publish.yml,MIXED,11,37,3,cache,
idna,kjd/idna,True,True,True,"id-token:write, pypa/gh-action-pypi-publish, pypi-publish-no-password",deploy.yml,ALL_SHA,9,9,1,cache,
charset-normalizer,jawah/charset_normalizer,True,True,True,"id-token:write, pypa/gh-action-pypi-publish, pypi-publish-no-password",cd.yml,MIXED,38,43,4,cache,
pip,pypa/pip,True,True,True,"id-token:write, pypa/gh-action-pypi-publish, pypi-publish-no-password",release.yml,MIXED,4,21,4,cache,
click,pallets/click,True,True,True,"id-token:write, pypa/gh-action-pypi-publish, pypi-publis
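A minimal sketch of reading this CSV with Python's standard `csv` module, using two rows copied from the preview above (the real file in the gist has many more rows and a truncated final line):

```python
import csv
import io

# Header and two rows copied verbatim from the gist preview.
SAMPLE = """package,github_repo,uses_trusted_publishing,has_pypi_publish_action,has_id_token_write,tp_signals,publishing_workflow,pinning_status,sha_pinned_actions,total_actions,workflow_count,source,error
certifi,certifi/python-certifi,True,True,True,"id-token:write, pypa/gh-action-pypi-publish, pypi-publish-no-password",release.yml,ALL_SHA,5,5,1,cache,
idna,kjd/idna,True,True,True,"id-token:write, pypa/gh-action-pypi-publish, pypi-publish-no-password",deploy.yml,ALL_SHA,9,9,1,cache,
"""

def fully_sha_pinned(rows):
    # Keep packages whose workflow actions are all pinned to a commit SHA.
    return [r["package"] for r in rows if r["pinning_status"] == "ALL_SHA"]

rows = list(csv.DictReader(io.StringIO(SAMPLE)))
print(fully_sha_pinned(rows))  # ['certifi', 'idna']
```

`csv.DictReader` handles the quoted `tp_signals` field (which contains commas) correctly, which is why a naive `str.split(",")` would misparse these rows.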
@lmmx
lmmx / pypi_top_500_packages_no_tp.csv
Created March 26, 2026 11:09
Sublist from the top 500 packages on PyPI which do not use Trusted Publishing
| rank | package | repo | pinning | tp | publishing_wf |
|------|---------|------|---------|----|---------------|
| 1 | boto3 | boto/boto3 | MIXED | False | |
| 2 | packaging | pypa/packaging | ALL_SHA | False | |
| 3 | urllib3 | urllib3/urllib3 | ALL_SHA | False | |
| 4 | setuptools | pypa/setuptools | NONE | False | |
| 6 | requests | psf/requests | ALL_SHA | False | |
| 8 | botocore | boto/botocore | MIXED | False | |
| 11 | aiobotocore | aio-libs/aiobotocore | ALL_SHA | False | |
| 12 | python-dateutil | dateutil/dateutil | NONE | False | publish.yml |
| 13 | six | benjaminp/six | NONE | False | |
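The pinning column in this sample can be tallied directly; the list below just transcribes the nine rows shown above:

```python
from collections import Counter

# Pinning statuses for the nine sampled rows (ranks 1-13), in order.
pinning = ["MIXED", "ALL_SHA", "ALL_SHA", "NONE", "ALL_SHA",
           "MIXED", "ALL_SHA", "NONE", "NONE"]

counts = Counter(pinning)
print(dict(counts))  # {'MIXED': 2, 'ALL_SHA': 4, 'NONE': 3}
```

So even among top packages not yet using Trusted Publishing, a plurality of this small sample pins all workflow actions by SHA.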
@lmmx
lmmx / prompt.md
Created March 6, 2026 16:27
Claude prompt style guide for instructive guidance

Response Style Instructions

You are directive. When I give you a list of tasks or describe what I need to do, your job is to triage by time sensitivity and then tell me what to do in what order. Give commands, not suggestions. Say "do X" not "you could do X" or "you might want to do X."

You are calm. Do not use urgency language like "right now", "immediately", "ASAP", or "hurry." I don't have a procrastination problem — I have a sequencing and prioritisation problem. Your commands should feel like a composed, clear-headed boss delegating, not a drill sergeant.

You do not hedge inside commands. Once you've decided something belongs in the sequence, commit to it. "Do A, then do B" — not "do A, and then maybe B if you feel up to it." Hedging inside a command undermines the whole point of giving one.

You are not curt. Being directive does not mean being short with me. You can explain your reasoning for the triage order, flag subtasks I mentioned but might forget, and generally be warm. Just

@lmmx
lmmx / table.md
Created February 13, 2026 14:01
Review of token classifier model
| Category | Oral / Literate | Avg F1 (≈) | Individual Markers (F1) | Comment on Why / Causes | Verdict |
|----------|-----------------|------------|-------------------------|-------------------------|---------|
| Address & Interaction | Oral | 0.604 | vocative (.675), imperative (.606), second_person (.549), inclusive_we (.608), rhetorical_question (.661), phatic_check (.634), phatic_filler (.495) | Strong lexical and syntactic cues; short-range dep | |
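The Avg F1 figure in the row above is the plain mean of the individual marker scores, which a one-liner can confirm:

```python
# Individual marker F1 scores from the Address & Interaction row.
marker_f1 = [0.675, 0.606, 0.549, 0.608, 0.661, 0.634, 0.495]

avg = sum(marker_f1) / len(marker_f1)
print(round(avg, 3))  # 0.604
```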
@lmmx
lmmx / demo_academic_hedged.py
Last active February 13, 2026 13:27
Token classifier demo
import json
import torch
from huggingface_hub import hf_hub_download
from transformers import AutoModel, AutoTokenizer


def main():
    model_name = "HavelockAI/bert-token-classifier"
    # Load the tokenizer and model weights from the Hugging Face Hub
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name)


if __name__ == "__main__":
    main()
| marker | precision | recall | f1-score | support |
|--------|-----------|--------|----------|---------|
| B-literate_list_structure | 0.975 | 0.750 | 0.848 | 52 |
| O | 0.771 | 0.847 | 0.807 | 37244 |
| B-oral_imperative | 0.753 | 0.805 | 0.778 | 87 |
| B-literate_footnote_reference | 0.810 | 0.739 | 0.773 | 23 |
| B-oral_rhetorical_question | 0.649 | 0.809 | 0.720 | 89 |
| I-literate_technical_abbreviation | 0.687 | 0.731 | 0.709 | 108 |
| B-oral_inclusive_we | 0.603 | 0.793 | 0.685 | 266 |
| I-literate_footnote_reference | 0.570 | 0.821 | 0.673 | 84 |
| I-oral_rhetorical_question | 0.646 | 0.683 | 0.664 | 840 |
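Each f1-score in this report is the harmonic mean of its precision and recall columns, which can be spot-checked:

```python
def f1(precision, recall):
    # Harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)

# B-literate_list_structure: p=0.975, r=0.75
print(round(f1(0.975, 0.75), 3))  # 0.848
# B-oral_imperative: p=0.753, r=0.805
print(round(f1(0.753, 0.805), 3))  # 0.778
```

The harmonic mean penalises imbalance, which is why B-literate_list_structure's near-perfect precision still yields only 0.848 with recall at 0.75.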