@skrymets
skrymets / Cornell's Notes Template.md
Last active February 23, 2025 13:03
Cornell Note Template for Obsidian
---
cssclass: cornell-note
---

Cues

Notes
The Cornell Note-taking System is a popular and effective method for organizing and summarizing information during lectures, readings, or any other form of learning.

@adefossez
adefossez / perlin-noise.ipynb
Last active August 25, 2023 19:13
Perlin noise.ipynb
@vadimkantorov
vadimkantorov / perlin.py
Last active December 27, 2024 23:48
Perlin noise in PyTorch
# ported from https://github.com/pvigier/perlin-numpy/blob/master/perlin2d.py
import torch
import math

def rand_perlin_2d(shape, res, fade=lambda t: 6*t**5 - 15*t**4 + 10*t**3):
    # lattice cell size in pixel coordinates, and pixels per lattice cell
    delta = (res[0] / shape[0], res[1] / shape[1])
    d = (shape[0] // res[0], shape[1] // res[1])
    # fractional position of every pixel inside its lattice cell
    grid = torch.stack(torch.meshgrid(torch.arange(0, res[0], delta[0]),
                                      torch.arange(0, res[1], delta[1])), dim=-1) % 1
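The preview stops after the grid construction. As a hedged reconstruction of how the function continues in the upstream perlin-numpy port it credits (random gradient directions at the lattice corners, four corner dot products, then fade-smoothed bilinear interpolation), assuming shape is divisible by res:

    # continuation sketch of rand_perlin_2d (reconstruction, not verbatim gist code)
    angles = 2 * math.pi * torch.rand(res[0] + 1, res[1] + 1)
    gradients = torch.stack((torch.cos(angles), torch.sin(angles)), dim=-1)
    # tile each corner gradient over the d[0] x d[1] pixels of its cell
    tile_grads = lambda s0, s1: gradients[s0[0]:s0[1], s1[0]:s1[1]] \
        .repeat_interleave(d[0], 0).repeat_interleave(d[1], 1)
    # dot product of a corner's gradient with the pixel offset from that corner
    dot = lambda grad, shift: (torch.stack((grid[..., 0] + shift[0],
                                            grid[..., 1] + shift[1]), dim=-1) * grad).sum(dim=-1)
    n00 = dot(tile_grads([0, -1], [0, -1]), [0, 0])
    n10 = dot(tile_grads([1, None], [0, -1]), [-1, 0])
    n01 = dot(tile_grads([0, -1], [1, None]), [0, -1])
    n11 = dot(tile_grads([1, None], [1, None]), [-1, -1])
    t = fade(grid)
    # interpolate horizontally, then vertically, with the smoothed weights
    return math.sqrt(2) * torch.lerp(torch.lerp(n00, n10, t[..., 0]),
                                     torch.lerp(n01, n11, t[..., 0]), t[..., 1])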

There are several approaches (the first is sketched below):

  • Mount Google Drive in the local Colab VM
  • Upload and download via the browser
  • Use colab_util.py in a Python script
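A minimal sketch of the first option, using the google.colab helper module (the /content/drive mount point is the conventional one; the file path is a made-up example):

from google.colab import drive  # available only inside a Colab runtime

drive.mount('/content/drive')  # opens an auth prompt on first use

# Drive contents then appear on the local filesystem:
with open('/content/drive/My Drive/notes.txt') as f:  # hypothetical file
    print(f.read())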
@diaoenmao
diaoenmao / patches.py
Last active January 7, 2023 17:00
Extract patches from images and recover original images from patches
import torch
import torch.nn as nn

def extract_patches_2d(img, patch_shape, step=[1.0, 1.0], batch_first=False):
    patch_H, patch_W = patch_shape[0], patch_shape[1]
    # zero-pad the height if the image is shorter than one patch
    if img.size(2) < patch_H:
        num_padded_H_Top = (patch_H - img.size(2)) // 2
        num_padded_H_Bottom = patch_H - img.size(2) - num_padded_H_Top
        padding_H = nn.ConstantPad2d((0, 0, num_padded_H_Top, num_padded_H_Bottom), 0)
        img = padding_H(img)
    # zero-pad the width likewise
    if img.size(3) < patch_W:
        num_padded_W_Left = (patch_W - img.size(3)) // 2
        num_padded_W_Right = patch_W - img.size(3) - num_padded_W_Left
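The preview ends inside the padding logic. As a hedged, self-contained sketch of the core extraction step (not the gist's exact continuation), Tensor.unfold slices the padded image into overlapping patches; here step is a fraction of the patch size, matching the signature above:

import torch

def extract_patches_sketch(img, patch_shape, step=(1.0, 1.0)):
    # img: (N, C, H, W); step is a fraction of the patch size
    patch_H, patch_W = patch_shape
    step_H = max(1, int(patch_H * step[0]))
    step_W = max(1, int(patch_W * step[1]))
    # unfold yields (N, C, nH, nW, patch_H, patch_W)
    patches = img.unfold(2, patch_H, step_H).unfold(3, patch_W, step_W)
    # flatten the patch grid into one batch dimension
    return patches.permute(0, 2, 3, 1, 4, 5).reshape(-1, img.size(1), patch_H, patch_W)

x = torch.randn(2, 3, 32, 32)
print(extract_patches_sketch(x, (8, 8), (0.5, 0.5)).shape)  # (98, 3, 8, 8): 7x7 patches per image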

@peteflorence
peteflorence / pytorch_bilinear_interpolation.md
Last active November 18, 2024 06:10
Bilinear interpolation in PyTorch, and benchmarking vs. numpy

Here's a simple implementation of bilinear interpolation on tensors using PyTorch.

I wrote this up since I ended up learning a lot about options for interpolation in both the numpy and PyTorch ecosystems. Beyond interpolation itself, it's also a nice case study in how PyTorch can magically run very numpy-like code on the GPU (and, by the way, do autodiff for you too).

For interpolation in PyTorch, this open issue calls for more interpolation features. There is now an nn.functional.grid_sample() feature, though at first it didn't look like what I needed (we'll come back to this later).

In particular I wanted to take an image, W x H x C, and sample it many times at different random locations. Note also that this is different from upsampling, which exhaustively samples and doesn't give us flexibility in choosing the sample locations.
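A minimal sketch of that use case with torch.nn.functional.grid_sample (which, as the post hints, does end up covering it): sample a C-channel image at N random locations, with coordinates normalized to [-1, 1] as grid_sample expects. All shapes and names here are illustrative:

import torch
import torch.nn.functional as F

C, H, W, N = 3, 64, 64, 1000
image = torch.rand(1, C, H, W)      # grid_sample wants a batch dimension

# N random (x, y) sample locations in grid_sample's normalized [-1, 1] range
points = torch.rand(N, 2) * 2 - 1
grid = points.view(1, N, 1, 2)      # (batch, H_out, W_out, 2)

samples = F.grid_sample(image, grid, mode='bilinear', align_corners=True)
samples = samples.view(C, N).t()    # (N, C): one interpolated value per point
print(samples.shape)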

@kevinzakka
kevinzakka / data_loader.py
Last active March 16, 2025 18:14
Train, Validation and Test Split for torchvision Datasets
"""
Create train, valid, test iterators for CIFAR-10 [1].
Easily extended to MNIST, CIFAR-100 and Imagenet.
[1]: https://discuss.pytorch.org/t/feedback-on-pytorch-for-kaggle-competitions/2252/4
"""
import torch
import numpy as np
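The preview stops at the imports. A hedged sketch of the core idea (the full gist adds more options): shuffle and split the training indices, then hand each half to a SubsetRandomSampler so the train and validation loaders draw from disjoint parts of the same dataset. The valid_size value and loader settings are illustrative:

from torch.utils.data import DataLoader
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import datasets, transforms

valid_size = 0.1  # fraction of the training set held out for validation
dataset = datasets.CIFAR10('./data', train=True, download=True,
                           transform=transforms.ToTensor())

indices = np.arange(len(dataset))
np.random.shuffle(indices)
split = int(np.floor(valid_size * len(dataset)))
valid_idx, train_idx = indices[:split], indices[split:]

train_loader = DataLoader(dataset, batch_size=64,
                          sampler=SubsetRandomSampler(train_idx))
valid_loader = DataLoader(dataset, batch_size=64,
                          sampler=SubsetRandomSampler(valid_idx))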
@RodolfoFerro
RodolfoFerro / DFT_2D.py
Created May 31, 2017 06:26
2D - DFT: 2D - Discrete Fourier Transform
# 2 Dimension Fourier Transform (direct O(m^2 n^2) evaluation of the definition):
import numpy as np

def FT_2D(X):
    m, n = X.shape
    return np.array([[sum(X[i, j] * np.exp(-1j * 2 * np.pi * (k_m * i / m + k_n * j / n))
                          for i in range(m) for j in range(n))
                      for k_n in range(n)] for k_m in range(m)])
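A quick sanity check one can run (illustrative, not from the gist): np.fft.fft2 uses the same sign and normalization convention, so the two should agree to floating-point precision.

X = np.random.rand(8, 8)
print(np.allclose(FT_2D(X), np.fft.fft2(X)))  # expected: True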
@brannondorsey
brannondorsey / pix2pix_paper_notes.md
Last active January 3, 2022 09:57
Notes on the Pix2Pix (pixel-level image-to-image translation) Arxiv paper

Image-to-Image Translation with Conditional Adversarial Networks

Notes from arXiv:1611.07004v1 [cs.CV] 21 Nov 2016

  • Euclidean distance between predicted and ground truth pixels is not a good method of judging similarity because it yields blurry images.
  • GANs learn a loss function rather than using an existing one.
  • GANs learn a loss that tries to classify if the output image is real or fake, while simultaneously training a generative model to minimize this loss.
  • Conditional GANs (cGANs) learn a mapping from observed image x and random noise vector z to y: y = f(x, z)
  • The generator G is trained to produce outputs that cannot be distinguished from "real" images by an adversarially trained discriminator D, which is trained to do as well as possible at detecting the generator's "fakes".
  • The discriminator D, learns to classify between real and synthesized pairs. The generator learns to fool the discriminator.
  • Unlike an unconditional GAN, both the generator and the discriminator observe the input image x.
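For reference, a compact statement of the objective these notes describe, in the paper's notation (the full pix2pix objective also adds a weighted L1 term, as below):

\mathcal{L}_{cGAN}(G, D) = \mathbb{E}_{x,y}[\log D(x, y)] + \mathbb{E}_{x,z}[\log(1 - D(x, G(x, z)))]

G^{*} = \arg\min_{G}\max_{D}\ \mathcal{L}_{cGAN}(G, D) + \lambda\,\mathcal{L}_{L1}(G), \qquad \mathcal{L}_{L1}(G) = \mathbb{E}_{x,y,z}\big[\lVert y - G(x, z) \rVert_{1}\big]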