Chase King (chaseking)

@sahirshahryar
sahirshahryar / CompactRomanNumeral.java
Last active March 31, 2023 04:12
Roman numeral conversion
public static String romanNumerals(int value) {
    String[] values = {
        "M", "CM", "D", "CD", "C", "XC",
        "L", "XL", "X", "IX", "V", "IV", "I"
    };
    int[] correspondents = {
        1000, 900, 500, 400, 100, 90,
        50, 40, 10, 9, 5, 4, 1
    };
    StringBuilder result = new StringBuilder();
    for (int i = 0; i < correspondents.length; i++) { // greedily take the largest fit
        while (value >= correspondents[i]) {
            result.append(values[i]);
            value -= correspondents[i];
        }
    }
    return result.toString();
}
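For example, romanNumerals(1994) walks the table as M + CM + XC + IV and returns "MCMXCIV".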
@arxenix
arxenix / ImgMessage
Last active February 6, 2023 10:39
ImgMessage util class to send images to players with the chat bar!
package com.example.imgmessage; // placeholder package; replace with your plugin's own
import org.bukkit.ChatColor;
import org.bukkit.entity.Player;
import java.awt.*;
import java.awt.image.BufferedImage;
import java.util.HashMap;
/** Utility for sending an image to a player via the chat bar, rendered as colored text. */

// https://github.com/Bukkit/CraftBukkit/blob/7e1ac0a77129b169704c1e222ff2deb3ab6cd2d2/src/main/java/net/minecraft/server/EntityPlayer.java#L596
// Method to open an anvil inventory for a player
public static void openAnvil(Player player, Inventory inventory) {
    // Get the NMS EntityPlayer behind the Bukkit Player
    EntityPlayer p = ((CraftPlayer) player).getHandle();
    // Create the AnvilContainer
    AnvilContainer container = new AnvilContainer(p);
@staltz
staltz / introrx.md
Last active May 12, 2025 23:22
The introduction to Reactive Programming you've been missing
@karpathy
karpathy / min-char-rnn.py
Last active May 12, 2025 17:28
Minimal character-level language model with a Vanilla Recurrent Neural Network, in Python/numpy
"""
Minimal character-level Vanilla RNN model. Written by Andrej Karpathy (@karpathy)
BSD License
"""
import numpy as np
# data I/O
data = open('input.txt', 'r').read() # should be simple plain text file
chars = list(set(data))
data_size, vocab_size = len(data), len(chars)
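The preview stops right after the data load; a minimal sketch of the step that naturally follows, building the character/index lookup tables used to one-hot encode the text (the dict names are illustrative):

char_to_ix = { ch: i for i, ch in enumerate(chars) }  # character -> integer index
ix_to_char = { i: ch for i, ch in enumerate(chars) }  # integer index -> character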
@TengdaHan
TengdaHan / ddp_notes.md
Last active April 21, 2025 08:06
Multi-node training on slurm with PyTorch

Multi-node training on slurm with PyTorch

What's this?

  • A short note on how to start multi-node training with PyTorch under the slurm scheduler.
  • Especially useful when the scheduler is so busy that you cannot get multiple GPUs allocated on one node, or when a single job needs more than 4 GPUs.
  • Requirement: you have to use PyTorch DistributedDataParallel (DDP) for this purpose; see the setup sketch after this list.
  • Warning: you might need to refactor your own code.
  • Warning: you might be secretly condemned by your colleagues for using too many GPUs.
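
A minimal sketch of the initialization this requires, assuming slurm's default environment variables (SLURM_PROCID, SLURM_NTASKS, SLURM_LOCALID) and a launch script that exports MASTER_ADDR and MASTER_PORT; the function name init_distributed is illustrative:

import os
import torch
import torch.distributed as dist

def init_distributed():
    rank = int(os.environ["SLURM_PROCID"])         # global rank across all nodes
    world_size = int(os.environ["SLURM_NTASKS"])   # total number of processes
    local_rank = int(os.environ["SLURM_LOCALID"])  # rank within this node
    dist.init_process_group(
        backend="nccl",        # the usual backend for multi-GPU training
        init_method="env://",  # reads MASTER_ADDR / MASTER_PORT from the environment
        rank=rank,
        world_size=world_size,
    )
    torch.cuda.set_device(local_rank)  # bind this process to its own GPU
    return rank, local_rank, world_size

From there, wrapping the model in torch.nn.parallel.DistributedDataParallel(model, device_ids=[local_rank]) and giving each DataLoader a DistributedSampler is the usual pattern.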