Olle Welin ollewelin

@ollewelin
ollewelin / main.cpp
Last active July 7, 2017 17:16
Autoencoder unsupervised learning with ReLU tied weights and supervised-learning logistic regression. MNIST test
///Now with a fully connected logistic regression network for supervised learning
///************* Parameters and things regarding fully connected network **************
int fully_conn_backprop =0;
const int C_fully_hidd_nodes = 200;
const int C_fully_out_nodes = 10;
int fully_hidd_nodes = C_fully_hidd_nodes;
int fully_out_nodes = C_fully_out_nodes;
int drop_out_percent = 50;/// 50% dropout of hidden nodes during training
int verification = 0;
float Error_level=0.0f;
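A minimal sketch, assuming the hidden activations live in a plain float array (all function and array names here are hypothetical, not taken from the gist), of how the drop_out_percent setting could be applied to the fully connected hidden nodes during training, using inverted-dropout scaling so verification needs no rescaling:

#include <cstdlib>

// Hypothetical helper: randomly zero out hidden nodes during training.
// drop_out_percent matches the constant in the gist; everything else is assumed.
void apply_dropout(float *hidden_node, int fully_hidd_nodes, int drop_out_percent, int training)
{
    if (!training) return;                       // no dropout at verification/inference
    const float keep_scale = 100.0f / (100.0f - (float)drop_out_percent); // inverted-dropout scaling
    for (int j = 0; j < fully_hidd_nodes; j++) {
        if ((rand() % 100) < drop_out_percent) {
            hidden_node[j] = 0.0f;               // dropped node
        } else {
            hidden_node[j] *= keep_scale;        // kept node, rescaled
        }
    }
}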
@ollewelin
ollewelin / gist:a90b8606311e9270fd66fc3bab5153c3
Last active June 11, 2017 20:20
More test options Stacked Autoencoder + Raspicam
///Stacked Autoencoder 2 layer L1 and L2
///TODO: Not yet showing the image representation of each L2 feature projected onto an L1-layer image representation
int Pause_cam =0;
#define USE_RASPICAM_INPUT //If you want to use raspicam input data
//#define USE_MNIST_DATABASE// Here read the t10k-images-idx3-ubyte file
//Here a real image from the raspicam is used: take a random 10x10 pixel part and feed it into the autoencoder
//Tests a stacked 2-layer autoencoder: 4x inputs through the same Layer 1 (L1) filter down to Layer 2 (L2)
//L2 input nodes = 4x L1 nodes
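A hedged sketch of the stacking idea described above ("L2 input nodes = 4x L1 nodes"): four neighbouring 10x10 patches go through the same L1 encoder and their hidden codes are concatenated into the L2 input vector. All names, activations and layer sizes below are assumptions, not taken from the gist.

#include <cmath>

const int L1_INPUT  = 100;   // 10x10 patch (assumed)
const int L1_HIDDEN = 64;    // assumed L1 hidden size

float L1_weight[L1_HIDDEN][L1_INPUT];            // shared (tied) L1 filter weights

// Encode one patch with the shared L1 weights (sigmoid hidden activation assumed)
void encode_L1(const float *patch_in, float *hidden_out)
{
    for (int j = 0; j < L1_HIDDEN; j++) {
        float sum = 0.0f;
        for (int i = 0; i < L1_INPUT; i++) sum += L1_weight[j][i] * patch_in[i];
        hidden_out[j] = 1.0f / (1.0f + expf(-sum));
    }
}

// Build the L2 input: same L1 filter applied to all 4 patches, codes concatenated
void make_L2_input(const float patch[4][L1_INPUT], float *L2_input /* size 4*L1_HIDDEN */)
{
    for (int p = 0; p < 4; p++) {
        encode_L1(patch[p], &L2_input[p * L1_HIDDEN]);
    }
}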
@ollewelin
ollewelin / gist:d48ef585fe6c9b049d026abb91e027ac
Last active June 11, 2017 12:37
Stacked Autoencoder 2 layer L1 and L2, Deep Unsupervised Machine Learning, C++ Raspicam OpenCV
///Stacked Autoencoder 2 layer L1 and L2
///Hit <?> to read help menu.
///Now, with the USE_IND_NOISE switch ON, the first layer makes more realistic Gabor-filter-like features
///TODO: Not yet showing the image representation of each L2 feature projected onto an L1-layer image representation
int Pause_cam =0;
#define USE_RASPICAM_INPUT //If you want to use raspicam input data
//#define USE_MNIST_DATABASE// Here read the t10k-images-idx3-ubyte file
//Here a real image from the raspicam is used: take a random 10x10 pixel part and feed it into the autoencoder
//Tests a stacked 2-layer autoencoder: 4x inputs through the same Layer 1 (L1) filter down to Layer 2 (L2)
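A hedged sketch of what the USE_IND_NOISE switch presumably toggles: independent, zero-mean noise added to each input pixel before the autoencoder reconstructs the clean patch (a denoising-autoencoder-style setup). The noise_level parameter and array names are assumptions, not taken from the gist.

#include <cstdlib>

// Hypothetical: add independent noise per pixel; the clean patch stays the
// reconstruction target while the noisy copy is fed to the encoder.
void add_independent_noise(const float *clean_in, float *noisy_in, int n, float noise_level)
{
    for (int i = 0; i < n; i++) {
        float r = ((float)rand() / (float)RAND_MAX) * 2.0f - 1.0f;  // uniform in [-1, 1]
        noisy_in[i] = clean_in[i] + noise_level * r;                 // independent noise per pixel
    }
}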
@ollewelin
ollewelin / gist:b5b4c414523ff7b9698aa5b824ae12ba
Last active June 11, 2017 12:39
Autoencoder trains on 10x10 patches from real-time raspicam video on Raspberry Pi
///Now, with the USE_IND_NOISE switch ON, makes more realistic Gabor-filter-like features
/// Bug fix: replace hidden_node[j] and output_node[i] with Bias_level
// change_weight_in2hid[j] = (LearningRate/2) * hidden_node[j] * hid_node_delta[j] + Momentum * change_weight_in2hid[j];
// change_weight_hid2out[i] = (LearningRate/2) * output_node[i] * delta_pixel + Momentum * change_weight_hid2out[i];
///#define USE_LIM_BIAS//
//Here a real image from the raspicam is used: take a random 10x10 pixel part and feed it into the autoencoder
const int cam_h = 240;
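The two commented-out lines above show the old, buggy formulas. A hedged sketch of the corrected bias-weight update they describe: the bias weight's "input" is the constant Bias_level, not the hidden/output node activation. LearningRate, Momentum and the delta array follow the commented-out lines; their values and the remaining names are assumptions.

const float Bias_level = 1.0f;
float LearningRate     = 0.01f;   // assumed value
float Momentum         = 0.9f;    // assumed value

// change_bias_in2hid[j]: momentum-filtered change of the bias weight into hidden node j
void update_hidden_bias_weights(float *bias_w_in2hid, float *change_bias_in2hid,
                                const float *hid_node_delta, int hidden_nodes)
{
    for (int j = 0; j < hidden_nodes; j++) {
        change_bias_in2hid[j] = (LearningRate / 2) * Bias_level * hid_node_delta[j]
                              + Momentum * change_bias_in2hid[j];
        bias_w_in2hid[j] += change_bias_in2hid[j];
    }
}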
@ollewelin
ollewelin / gist:fe43311536a70fe7a52f0ac16f87ab7f
Last active May 27, 2017 11:52
Raspberry Pi unsupervised learning autoencoder with bias node
///Now also adds bias nodes, shown in the last 2 patches
const float Bias_level = 1.0f;
const float Bias_w_n_range = -1.0f;
const float Bias_w_p_range = 1.0f;
const float change_bias_weight_range = 0.2f;
//Added visualization of hidden nodes
//Added so you can save the weights by pressing <S>
//Autoencoder learning test with raspicam
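A minimal sketch, assuming a sigmoid hidden layer and hypothetical weight-array names, of how the bias node contributes to a hidden node's forward pass with the Bias_level constant from the preview above:

#include <cmath>

const float Bias_level = 1.0f;   // as in the gist preview

// Hypothetical hidden-node forward pass: the bias node adds a weighted
// constant Bias_level before the activation function.
float hidden_forward(const float *input, const float *weight_in2hid,
                     float bias_weight, int input_nodes)
{
    float sum = bias_weight * Bias_level;                 // bias node contribution
    for (int i = 0; i < input_nodes; i++) sum += weight_in2hid[i] * input[i];
    return 1.0f / (1.0f + expf(-sum));                    // sigmoid activation (assumed)
}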
@ollewelin
ollewelin / gist:92eaae7905d7d123fadc97d5c7061e90
Last active May 25, 2017 21:30
Autoencoder test with MNIST 10k dataset training digits
//Added visualization of hidden nodes
//Added so you can save the weights by pressing <S>
//Autoencoder learning test with raspicam
//Press <Y> at start. It's tested with the dataset from
/// t10k-images-idx3-ubyte
/// http://yann.lecun.com/exdb/mnist/
//If you comment out the USE_MNIST_DATABASE switch then a simpler pattern is used, taken from a picture: https://blog.webkid.io/datasets-for-machine-learning/
//0..9 28x28 pixel digits pattern, mnist.png filename, 289x289 original
//original image taken from
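A hedged sketch of reading the t10k-images-idx3-ubyte file mentioned above (IDX3 layout from http://yann.lecun.com/exdb/mnist/: big-endian int32 magic 2051, image count, rows, cols, then raw pixel bytes). Scaling the pixels to 0..1 floats is an assumption about how the gist feeds the autoencoder.

#include <cstdio>
#include <vector>

// Read one big-endian 32-bit integer from the IDX header
static int read_be_int(FILE *f)
{
    unsigned char b[4];
    if (fread(b, 1, 4, f) != 4) return -1;
    return (b[0] << 24) | (b[1] << 16) | (b[2] << 8) | b[3];
}

std::vector<float> load_mnist_images(const char *path, int &count, int &rows, int &cols)
{
    std::vector<float> images;
    FILE *f = fopen(path, "rb");
    if (!f) return images;
    int magic = read_be_int(f);          // expected 2051 for IDX3 image files
    count = read_be_int(f);
    rows  = read_be_int(f);
    cols  = read_be_int(f);
    if (magic != 2051) { fclose(f); return images; }
    images.resize((size_t)count * rows * cols);
    for (size_t i = 0; i < images.size(); i++) {
        images[i] = (float)fgetc(f) / 255.0f;   // scale pixel to 0..1 (assumed)
    }
    fclose(f);
    return images;
}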
@ollewelin
ollewelin / gist:5d88ab7406690ec7414be568004e83b0
Last active May 24, 2017 15:58
Autoencoder test on MNIST data
//Test2 improved and also shows (visualizes) the noise input stimulus
//Autoencoder learning test with raspicam
//Press <Y> at start. It's tested with a 0..9 28x28 pixel digits pattern, mnist.png filename, 289x289 original
//original image taken from
// https://blog.webkid.io/datasets-for-machine-learning/
//#define USE_RASPICAM_INPUT //If you want to use raspicam input data
#include <opencv2/highgui/highgui.hpp> // OpenCV window I/O
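A hedged sketch of how the noise input stimulus could be visualized with the OpenCV header above: copy the noisy 28x28 input vector (assumed to be in the 0..1 range) into a CV_32F Mat and show it in a window. The function and window names are assumptions.

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

// Hypothetical: show the noisy input vector as a grayscale image
void show_noise_stimulus(const float *noisy_in, int rows, int cols)
{
    cv::Mat img(rows, cols, CV_32F);
    for (int r = 0; r < rows; r++)
        for (int c = 0; c < cols; c++)
            img.at<float>(r, c) = noisy_in[r * cols + c];
    cv::imshow("noise input stimulus", img);   // imshow maps CV_32F 0..1 to 0..255
    cv::waitKey(1);
}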
@ollewelin
ollewelin / gist:dd94fc45a99b3fb34a1479270716dc23
Last active May 24, 2017 12:58
Autoencoder neural network raspberry pi
//Autoencoder learning test with raspicam
//Press <Y> at start. It's tested with a 0..9 28x28 pixel digits pattern, mnist.png filename, 289x289 original
//original image taken from
// https://blog.webkid.io/datasets-for-machine-learning/
//#define USE_RASPICAM_INPUT //If you want to use raspicam input data
#include <opencv2/highgui/highgui.hpp> // OpenCV window I/O
#include <opencv2/imgproc/imgproc.hpp> // Gaussian Blur
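A hedged sketch of feeding the autoencoder from a camera frame, as described in the other raspicam gists: pick a random 10x10 patch from a grayscale frame and scale the pixels to 0..1 floats. The patch size, names and CV_8UC1 frame format are assumptions.

#include <opencv2/core/core.hpp>
#include <cstdlib>

// Hypothetical: copy a random 10x10 patch of a grayscale frame into the
// autoencoder input vector, scaled to 0..1
void random_patch_to_input(const cv::Mat &gray_frame, float *input, int patch = 10)
{
    int x = rand() % (gray_frame.cols - patch + 1);
    int y = rand() % (gray_frame.rows - patch + 1);
    for (int r = 0; r < patch; r++)
        for (int c = 0; c < patch; c++)
            input[r * patch + c] = (float)gray_frame.at<unsigned char>(y + r, x + c) / 255.0f;
}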
@ollewelin
ollewelin / gist:a29ce1a65a25df89a9f7de3e16a000d5
Created May 6, 2017 07:42
Image transforming tool to produce more training and verification images
//This code takes some posXXX.jpg and verXXX.jpg files in the program root dir and transforms the images into a folder named \positive_data\ (posXXX.jpg, verXXX.jpg)
//#include <stdio.h>
//#include <unistd.h>
//#include <ctime>
//#include <iostream>
//#include <raspicam/raspicam_cv.h>
#include <opencv2/highgui/highgui.hpp> // OpenCV window I/O
#include <opencv2/imgproc/imgproc.hpp> // Gaussian Blur
//2017-05-11 fix bug: index was wrong before; m32_conv0 = convolute_mat2(&Feature0Kernel[31][0], FE0KSIZESQR, FE0KSIZESQR, m1_0_padded, int (m1_0_padded.cols));// Make a convolution of the image
//2017-05-05 32x32x32 feature straight connection
//2017-05-05 FIX so the kernel weights update directly after each pixel step (before, all steps were summed up together, which didn't work properly). Now the kernel patches train much faster and better;
//the kernel features now adapt and look much more like they correspond to the training images.
//Added dropout on fully connected HiddenNodes to prevent overtraining
//2017-01-17 fix bug in make_SumChangeFeature0Weights: replace c_m1 with m1_conv0
//Asks whether to load any rerun training turn
//Reruns fully connected weights (10 times); locks kernel layer 0 after (2) reruns and locks kernel layer 1 after (4) reruns
//Show all kernels in 3 windows
//In this example there are 6 output nodes; the training image set is explained at the start of the program.
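A hedged sketch of what an image-transforming tool like this could look like: read each posXXX.jpg, then write a mirrored and a slightly rotated copy into the positive_data folder. The exact transforms, angle and file-name pattern are assumptions, not taken from the gist.

#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <cstdio>

int main()
{
    char in_name[64], out_name[64];
    int out_index = 0;
    for (int i = 0; i < 100; i++) {                      // assumed pos000.jpg..pos099.jpg
        sprintf(in_name, "pos%03d.jpg", i);
        cv::Mat img = cv::imread(in_name);
        if (img.empty()) continue;                       // skip missing files

        cv::Mat flipped;
        cv::flip(img, flipped, 1);                       // horizontal mirror
        sprintf(out_name, "positive_data/pos%03d.jpg", out_index++);
        cv::imwrite(out_name, flipped);

        cv::Mat rotated;
        cv::Mat rot = cv::getRotationMatrix2D(cv::Point2f(img.cols / 2.0f, img.rows / 2.0f), 5.0, 1.0);
        cv::warpAffine(img, rotated, rot, img.size());   // rotate 5 degrees (assumed angle)
        sprintf(out_name, "positive_data/pos%03d.jpg", out_index++);
        cv::imwrite(out_name, rotated);
    }
    return 0;
}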