<?
/////////////////////
// slack2html
// by @levelsio
/////////////////////
//
/////////////////////
// WHAT DOES THIS DO?
/////////////////////
//
// Converts an exported Slack archive into static, browsable HTML pages.
@baraldilorenzo
baraldilorenzo / readme.md
Last active January 14, 2025 11:07
VGG-16 pre-trained model for Keras

## VGG16 model for Keras

This is the Keras model of the 16-layer network used by the VGG team in the ILSVRC-2014 competition.

It has been obtained by directly converting the Caffe model provided by the authors.

Details about the network architecture can be found in the following arXiv paper:

Very Deep Convolutional Networks for Large-Scale Image Recognition

K. Simonyan, A. Zisserman
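
For reference, modern Keras releases bundle this same 16-layer architecture (with weights ported from the original Caffe release) as `keras.applications.VGG16`, which is the quickest way to experiment with it today. A minimal classification sketch; the image path is a placeholder:

```python
import numpy as np
from keras.applications.vgg16 import VGG16, preprocess_input, decode_predictions
from keras.preprocessing import image

# Build VGG-16 with the ImageNet (ILSVRC-2014) weights; downloaded on first use.
model = VGG16(weights="imagenet")

# "elephant.jpg" is a placeholder path; VGG-16 expects 224x224 RGB input.
img = image.load_img("elephant.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

# Print the top-3 ImageNet classes for the image.
print(decode_predictions(model.predict(x), top=3)[0])
```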

@kylemcdonald
kylemcdonald / CameraImage.cpp
Created November 23, 2015 15:30
openFrameworks app for sending images to disk for processing, and reading text back from disk. Used for "NeuralTalk and Walk".
#include "ofMain.h"
#include "ofxTiming.h"

class ofApp : public ofBaseApp {
public:
    ofVideoGrabber grabber;   // camera capture
    DelayTimer delay;         // paces the periodic writes of frames to disk
    ofTrueTypeFont font;      // renders the caption read back from disk
    string description;       // latest caption text returned by the captioner
    // remaining app methods (setup/update/draw) omitted in this excerpt
};
@shagunsodhani
shagunsodhani / Batch Normalization.md
Last active July 25, 2023 18:07
Notes for "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift" paper

The Batch Normalization paper describes a method to address the difficulties that arise when training deep neural networks, in particular internal covariate shift. It makes normalization a part of the architecture itself and reports significant reductions in the number of iterations required to train the network.

Issues With Training Deep Neural Networks

Internal Covariate Shift

Covariate shift refers to a change in the input distribution of a learning system. In a deep network, the input to each layer is affected by the parameters of all the preceding layers, so even small parameter changes get amplified as they propagate through the network. The resulting change in the distribution of inputs to the internal layers is known as internal covariate shift.

It is well established that networks converge faster if their inputs are whitened (i.e., zero mean, unit variance) and uncorrelated; internal covariate shift pushes in just the opposite direction.
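
Concretely, the BN transform normalizes each activation using mini-batch statistics and then applies a learned scale and shift, so the layer can still represent the identity transform. A minimal NumPy sketch of the training-time forward pass for a fully connected layer (function and variable names are illustrative):

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Batch-normalize x of shape (N, D) over the mini-batch axis."""
    mu = x.mean(axis=0)                    # per-feature mini-batch mean
    var = x.var(axis=0)                    # per-feature mini-batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # zero mean, unit variance per feature
    return gamma * x_hat + beta            # learned scale/shift restore expressiveness
```

At inference time the paper replaces the mini-batch statistics with running (population) estimates collected during training.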

# author: Aaditya Prakash
# nvidia-smi does not show the full command of a process, when it was launched, or its RAM usage.
# ps does, but you need PIDs for that.
# lsof /dev/nvidia* gives PIDs, but only for the user invoking it.
# usage:
#   python programs_on_gpu.py
# Sample Output
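
# Putting the comments above together, a minimal sketch of the approach
# (assuming Linux, GNU ps, and a single device at /dev/nvidia0; as noted,
# lsof only sees processes owned by the invoking user unless run as root):

import subprocess

def gpu_processes(device="/dev/nvidia0"):
    """Print PID, launch time, RAM usage, and full command for processes using the GPU."""
    # lsof -t prints just the PIDs holding the device open.
    out = subprocess.run(["lsof", "-t", device], capture_output=True, text=True)
    for pid in sorted(set(out.stdout.split())):
        # ps fills in what nvidia-smi omits: launch time, RSS, and the full command line.
        ps = subprocess.run(
            ["ps", "-o", "pid,lstart,rss,command", "--no-headers", "-p", pid],
            capture_output=True, text=True,
        )
        print(ps.stdout.rstrip())

if __name__ == "__main__":
    gpu_processes()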