@WIStudent
WIStudent / Instructions.md
Last active November 16, 2018 08:15
Compiling DeepMatching's GPU version on Ubuntu 16.10

DeepMatching is an algorithm that finds corresponding points between two images. Its GPU implementation was written for Fedora 21, which makes things a bit more difficult if you want to run it on an Ubuntu system. This document contains step-by-step instructions for getting DeepMatching running on Ubuntu 16.10. I have only tested it with Ubuntu 16.10; let me know if it works with earlier versions too.

To compile the GPU version you first need to compile the Caffe version that comes bundled with the DeepMatching files. Newer versions of Caffe won't work because Caffe has since changed the structure of its header files.

Compiling Caffe

Before compiling Caffe we need to make sure all its dependencies are installed. From the installation guide for Ubuntu 16.04/15.10:

sudo apt-get install build-essential cmake git pkg-config
@alper111
alper111 / vgg_perceptual_loss.py
Last active May 10, 2025 16:44
PyTorch implementation of VGG perceptual loss
import torch
import torchvision

class VGGPerceptualLoss(torch.nn.Module):
    def __init__(self, resize=True):
        super(VGGPerceptualLoss, self).__init__()
        blocks = []
        blocks.append(torchvision.models.vgg16(pretrained=True).features[:4].eval())
        blocks.append(torchvision.models.vgg16(pretrained=True).features[4:9].eval())
        blocks.append(torchvision.models.vgg16(pretrained=True).features[9:16].eval())
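The preview cuts off mid-constructor. Below is a minimal sketch of how such a perceptual loss is typically completed: a fourth VGG16 block, frozen weights, ImageNet normalization, and an L1 distance summed over block activations. The completion is an assumption for illustration, not the gist's verbatim code.

        # assumed continuation of __init__ (sketch, not verbatim from the gist)
        blocks.append(torchvision.models.vgg16(pretrained=True).features[16:23].eval())
        for bl in blocks:
            for p in bl.parameters():
                p.requires_grad = False  # VGG stays a frozen feature extractor
        self.blocks = torch.nn.ModuleList(blocks)
        self.resize = resize
        # ImageNet statistics for normalizing inputs before the VGG blocks
        self.register_buffer("mean", torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1))
        self.register_buffer("std", torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1))

    def forward(self, input, target):
        # normalize both images, optionally resize to VGG's usual 224x224 input
        input = (input - self.mean) / self.std
        target = (target - self.mean) / self.std
        if self.resize:
            input = torch.nn.functional.interpolate(input, size=(224, 224), mode="bilinear", align_corners=False)
            target = torch.nn.functional.interpolate(target, size=(224, 224), mode="bilinear", align_corners=False)
        loss = 0.0
        for block in self.blocks:
            input = block(input)
            target = block(target)
            loss = loss + torch.nn.functional.l1_loss(input, target)
        return loss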
@CasiaFan
CasiaFan / ffmpeg_python_with_gpu_acceleration.py
Created September 8, 2019 16:35
Use a pipe to read ffmpeg-decoded video frames with NVIDIA GPU hardware acceleration
import subprocess as sp
import cv2
import numpy as np
from PIL import Image
import tensorflow as tf

ffmpeg_cmd_1 = ["./ffmpeg", "-y",
                "-hwaccel", "nvdec",
                "-c:v", "h264_cuvid",
                "-vsync", "0",
@karpathy
karpathy / stablediffusionwalk.py
Last active April 30, 2025 22:39
hacky stablediffusion code for generating videos
"""
stable diffusion dreaming
creates hypnotic moving videos by smoothly walking randomly through the sample space
example way to run this script:
$ python stablediffusionwalk.py --prompt "blueberry spaghetti" --name blueberry
to stitch together the images, e.g.:
$ ffmpeg -r 10 -f image2 -s 512x512 -i blueberry/frame%06d.jpg -vcodec libx264 -crf 10 -pix_fmt yuv420p blueberry.mp4
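The "smoothly walking" in the docstring refers to interpolating between random latent noise vectors from frame to frame. Plain linear interpolation shrinks the norm of Gaussian noise midway, so scripts like this one typically use spherical interpolation instead. A minimal sketch of such a slerp helper (the function name and dot-product threshold are illustrative assumptions, not necessarily this script's exact code):

import numpy as np
import torch

def slerp(t, v0, v1, DOT_THRESHOLD=0.9995):
    # spherical interpolation between two noise tensors; falls back to
    # plain lerp when the vectors are nearly parallel
    v0_np = v0.cpu().numpy().astype(np.float64)
    v1_np = v1.cpu().numpy().astype(np.float64)
    dot = np.sum(v0_np * v1_np) / (np.linalg.norm(v0_np) * np.linalg.norm(v1_np))
    if np.abs(dot) > DOT_THRESHOLD:
        v2 = (1 - t) * v0_np + t * v1_np
    else:
        theta_0 = np.arccos(dot)
        theta_t = theta_0 * t
        v2 = (np.sin(theta_0 - theta_t) / np.sin(theta_0)) * v0_np \
             + (np.sin(theta_t) / np.sin(theta_0)) * v1_np
    return torch.from_numpy(v2).to(v0.device, dtype=v0.dtype)

Each output frame then comes from decoding slerp(t, noise_a, noise_b) with t stepping from 0 to 1, which is what produces the smooth drift between keyframes.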