Jae Hee Lee dschaehi
dschaehi / update_all_conda-forge_environments.sh
Last active October 28, 2024 15:06
A script for updating all conda-forge environments
#!/usr/bin/env sh
comment_box() {
    msg="$1"
    msg_length=${#msg}
    border_width=$((msg_length + 4))            # Padding for spaces around message
    # Generate the top and bottom border using the chosen comment character
    border=$(printf '%*s' "$border_width" '')   # Create empty string of width
    border=$(echo "$border" | tr ' ' '#')       # Replace spaces with '#'
dschaehi / update_ollama.sh
Last active October 28, 2024 15:06
A script for updating a manually installed ollama without sudo privileges
#!/usr/bin/env sh
client_version=$(ollama -v | grep -oP 'version is \K[0-9.]+')
latest_version=$(git ls-remote --tags --sort="v:refname" https://github.com/ollama/ollama.git | grep -E 'refs/tags/v?[0-9]+\.[0-9]+\.[0-9]+$' | tail -n1 | sed 's/.*\/v*//')
echo "Client version is $client_version"
echo "Latest version is $latest_version"
if [ "$(printf '%s\n' "$client_version" "$latest_version" | sort -V | head -n1)" != "$latest_version" ]; then
    OLLAMA_CACHE="$XDG_CACHE_HOME/ollama"
    OLLAMA="$XDG_OPT_HOME/ollama"
    mkdir -p "$OLLAMA_CACHE"
dschaehi / auctex-latexmk-fix.el
Last active September 5, 2024 06:40
Fix for Emacs AUCTeX with Latexmk not reverting the PDF buffer automatically
(defun advice-after-TeX-TeX-sentinel (&rest args)
  (unless (TeX-error-report-has-errors-p)
    (run-hook-with-args 'TeX-after-compilation-finished-functions
                        (with-current-buffer TeX-command-buffer
                          (expand-file-name
                           (TeX-active-master (TeX-output-extension)))))))

(advice-add 'TeX-TeX-sentinel :after #'advice-after-TeX-TeX-sentinel)
dschaehi / better_bibtex.js
Last active March 18, 2025 10:42
Zotero Better BibLaTeX script for arXiv and conference papers with missing booktitle
if (Translator.BetterBibTeX) {
  if (tex.entrytype === "unpublished" && !tex.has.note) {
    tex.add({ name: "note", value: "unpublished manuscript" });
  }
  if (tex.entrytype === "misc" && zotero.arXiv) {
    if (!tex.has.misc) tex.entrytype = "article";
    if (tex.has.number) tex.remove("number");
    if (tex.has.journal) tex.remove("journal");
    tex.add({
      name: "journal",
dschaehi / actions-zotero.yml
Last active August 31, 2024 11:59
Actions and Tags for Zotero
type: ActionsTagsBackup
author: jaeheelee
platformVersion: 7.0.3
pluginVersion: 2.0.0
timestamp: '2024-08-31T11:59:12.270Z'
actions:
  1715690567316-0F2mzqBj:
    event: 0
    operation: 4
    data: >
dschaehi / commenting.sty
Created July 11, 2023 20:46
A LaTeX package for commenting. It allows line breaks and environments inside comments.
\NeedsTeXFormat{LaTeX2e}
\ProvidesPackage{commenting}[2022/10/29 by Jae Hee Lee (http://jaeheelee.de)]
\RequirePackage{hyperref}
\RequirePackage{ifdraft}
\RequirePackage{graphicx}
\RequirePackage{xcolor}
\RequirePackage{xspace}
\RequirePackage{environ}
\RequirePackage{soul}
\RequirePackage{twemojis}
dschaehi / gradient_accumulation.py
Created September 28, 2022 15:40 — forked from thomwolf/gradient_accumulation.py
PyTorch gradient accumulation training loop
model.zero_grad()                                   # Reset gradients tensors
for i, (inputs, labels) in enumerate(training_set):
    predictions = model(inputs)                     # Forward pass
    loss = loss_function(predictions, labels)       # Compute loss function
    loss = loss / accumulation_steps                # Normalize our loss (if averaged)
    loss.backward()                                 # Backward pass
    if (i + 1) % accumulation_steps == 0:           # Wait for several backward steps
        optimizer.step()                            # Now we can do an optimizer step
        model.zero_grad()                           # Reset gradients tensors
        if (i + 1) % evaluation_steps == 0:         # Evaluate the model when we...
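The loop above relies on names defined elsewhere in the original training script (`model`, `training_set`, `accumulation_steps`, `evaluation_steps`). A self-contained sketch of the same accumulation pattern with a toy linear model and synthetic data — everything below is an illustrative assumption, not part of the forked gist:

```python
import torch
from torch import nn

torch.manual_seed(0)
model = nn.Linear(4, 1)                     # Toy model (assumption)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_function = nn.MSELoss()
training_set = [(torch.randn(2, 4), torch.randn(2, 1)) for _ in range(8)]
accumulation_steps = 4

model.zero_grad()
steps_taken = 0
for i, (inputs, labels) in enumerate(training_set):
    loss = loss_function(model(inputs), labels) / accumulation_steps
    loss.backward()                         # Gradients accumulate across iterations
    if (i + 1) % accumulation_steps == 0:
        optimizer.step()                    # One optimizer step per accumulation window
        model.zero_grad()
        steps_taken += 1

print(steps_taken)  # 8 mini-batches / 4 accumulation steps = 2 optimizer steps
```

Dividing the loss by `accumulation_steps` keeps the accumulated gradient equal, on average, to the gradient of one large batch.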
def flatten_json(json):
    if type(json) == dict:
        for k, v in list(json.items()):
            if type(v) == dict:
                flatten_json(v)
                json.pop(k)
                for k2, v2 in v.items():
                    json[k + "." + k2] = v2
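The function flattens nested dictionaries in place by joining keys with dots. A small usage example (the input data is an assumption for illustration):

```python
def flatten_json(json):
    # In-place flattening: nested keys are joined with "."
    if type(json) == dict:
        for k, v in list(json.items()):
            if type(v) == dict:
                flatten_json(v)              # Flatten the child first
                json.pop(k)                  # Replace the nested dict ...
                for k2, v2 in v.items():
                    json[k + "." + k2] = v2  # ... with dotted keys

data = {"a": {"b": 1, "c": {"d": 2}}, "e": 3}
flatten_json(data)
print(data)  # {'e': 3, 'a.b': 1, 'a.c.d': 2}
```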
dschaehi / hamming_score.py
Created November 6, 2021 10:32
The Hamming score in PyTorch
import torch


def hamming_score(pred, answer):
    out = ((pred & answer).sum(dim=1) / (pred | answer).sum(dim=1)).mean()
    if out.isnan():
        out = torch.tensor(1.0)
    return out


answer = torch.tensor([[0, 1, 0], [0, 1, 1], [1, 0, 1], [0, 0, 1]])
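The score averages the per-row intersection-over-union of binary prediction and answer matrices. A worked example against the `answer` tensor from the gist — the `pred` values are assumptions for illustration:

```python
import torch


def hamming_score(pred, answer):
    # Row-wise |pred AND answer| / |pred OR answer|, averaged over rows
    out = ((pred & answer).sum(dim=1) / (pred | answer).sum(dim=1)).mean()
    if out.isnan():
        out = torch.tensor(1.0)  # Convention: all-empty inputs count as a perfect match
    return out


answer = torch.tensor([[0, 1, 0], [0, 1, 1], [1, 0, 1], [0, 0, 1]])
pred = torch.tensor([[0, 1, 0], [1, 1, 0], [1, 0, 1], [0, 1, 1]])
# Per-row scores: 1/1, 1/3, 2/2, 1/2 -> mean = (1 + 1/3 + 1 + 1/2) / 4
score = hamming_score(pred, answer)
print(round(score.item(), 4))  # 0.7083
```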
dschaehi / extract_features.py
Last active March 15, 2022 21:27
Extracting ResNet Features Using PyTorch
from collections import OrderedDict

from torchvision import models


def gen_feature_extractor(model, output_layer):
    layers = OrderedDict()
    for k, v in model._modules.items():
        layers[k] = v
        if k == output_layer: