Faymek Feng (faymek)

  • Shanghai Jiao Tong University
  • Shanghai
Quasimondo / rgb2yuv_yuv2rgb.py
Last active November 9, 2024 20:58
RGB to YUV and YUV to RGB conversion for Numpy
import numpy as np

# input is an RGB numpy array with shape (height,width,3); can be uint, int, float or double, values expected in the range 0..255
# output is a double YUV numpy array with shape (height,width,3), values in the range 0..255
def RGB2YUV(rgb):
    m = np.array([[ 0.29900, -0.16874,  0.50000],
                  [ 0.58700, -0.33126, -0.41869],
                  [ 0.11400,  0.50000, -0.08131]])
    yuv = np.dot(rgb, m)
    yuv[:, :, 1:] += 128.0  # shift U and V from [-128,128] into [0,255]
    return yuv
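
The file name promises the inverse direction as well, but it is cut off in this capture. A minimal sketch of a YUV2RGB counterpart that numerically inverts the forward matrix above (the original gist may instead hard-code the inverse coefficients):

def YUV2RGB(yuv):
    # sketch: undo the chroma offset, then apply the inverse of the forward matrix
    m = np.array([[ 0.29900, -0.16874,  0.50000],
                  [ 0.58700, -0.33126, -0.41869],
                  [ 0.11400,  0.50000, -0.08131]])
    yuv = yuv.astype(float)   # work in double without mutating the caller's array
    yuv[:, :, 1:] -= 128.0    # shift U and V back to [-128,128]
    return np.dot(yuv, np.linalg.inv(m))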
specter119 / zot_rm_empty_folders.py
Last active February 1, 2023 07:44
remove empty folders in `storage`
#!/usr/bin/env python
# coding: utf-8
from __future__ import print_function
import configparser
import re
import shutil
import sys
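
The preview cuts off after the imports. A minimal sketch of what the description promises, continuing the file above (so `sys` is already imported); the `remove_empty_folders` helper and the default Zotero path are illustrative assumptions, not the original script:

from pathlib import Path

def remove_empty_folders(storage):
    # hypothetical helper: delete subfolders of Zotero's `storage` dir that contain no files
    for folder in sorted(storage.iterdir()):
        if folder.is_dir() and not any(folder.iterdir()):
            print('removing empty folder:', folder)
            folder.rmdir()

if __name__ == '__main__':
    # assumed default Zotero data location; pass the real storage path as argv[1] if yours differs
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path.home() / 'Zotero' / 'storage'
    remove_empty_folders(root)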
madebyollin / useful_nn_concepts.md
Created January 12, 2025 05:35
Useful neural network training concepts (narrow usage, broad applicability)

These useful concepts show up in specific areas of NN-training literature but can be applied pretty broadly.

  1. Non-leaky augmentations: you can add arbitrary augmentations during training, without substantially biasing in-domain performance, by adding a secondary input that tells the network which augmentations were used. This technique shows up in the Karras et al. image generation papers (ex. https://arxiv.org/pdf/2206.00364), but it's applicable whenever you want good performance on limited data; a toy sketch follows this list.
  2. Batch-stratified sampling: rather than generating per-sample random numbers with e.g. torch.rand(batch_size), you can use (torch.randperm(batch_size) + torch.rand(batch_size)) / batch_size instead, which has the same marginal distribution but lower variance across the batch, and therefore trains more stably. This shows up in k-diffusion https://github.com/crowsonkb/k-diffusion/commit/a2b7b5f1ea0d3711a06661ca9e41b4e6089e5707, but it's applicable whenever you're randomizing data across the batch axis; see the second sketch after this list.
  3. Replay buffers: when your training samples are expensive to produce (generated by a simulator, a slow pipeline, or the model itself), you can store recent samples in a buffer and draw minibatches from it, reusing each sample several times. This is standard in off-policy RL (e.g. DQN), but it's applicable whenever generating a fresh sample costs more than replaying an old one.
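
A toy sketch of concept 1, assuming a small image classifier; AugConditionedNet, N_AUGS, and the embedding-sum conditioning are illustrative choices, not the papers' exact architecture:

import torch
import torch.nn as nn

N_AUGS = 4  # e.g. identity, hflip, 90-degree rotation, color jitter

class AugConditionedNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU())
        self.aug_embed = nn.Embedding(N_AUGS, 128)  # secondary input: which augmentation was applied
        self.head = nn.Linear(128, 10)

    def forward(self, x, aug_id):
        # the network can attribute augmentation artifacts to aug_id instead of absorbing them
        return self.head(self.backbone(x) + self.aug_embed(aug_id))

# train with randomly augmented (x, aug_id) pairs; at inference, pass aug_id=0
# (identity) to recover unbiased in-domain behavior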
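
And a quick illustration of concept 2: both lines below draw uniforms in [0, 1), but the stratified version places exactly one sample in each of the batch_size equal-width bins:

import torch

batch_size = 8

# naive: independent uniforms; samples can cluster by chance
t_naive = torch.rand(batch_size)

# stratified: one uniform per bin [k/B, (k+1)/B), shuffled across the batch
# (out-of-place form, since randperm yields an integer tensor that an
# in-place float add_ would reject)
t_strat = (torch.randperm(batch_size) + torch.rand(batch_size)) / batch_size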