Seyyed Hossein Hasanpour Coderx7

@Coderx7
Coderx7 / maxout_layer
Created May 9, 2016 18:28 — forked from erogol/maxout_layer
maxout layer implementation for the Caffe library
layers {
  name: "conv1A"
  type: CONVOLUTION
  bottom: "data"
  top: "conv1A"
  blobs_lr: 1
  blobs_lr: 2
  weight_decay: 1
  weight_decay: 0
  convolution_param {
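The preview above is cut off at the convolution parameters. The usual maxout pattern in Caffe builds two parallel convolution layers (e.g. conv1A and a sibling conv1B) and merges them with an element-wise max (an Eltwise layer with operation MAX). A minimal numpy sketch of that merge, with illustrative shapes rather than values taken from the gist:

import numpy as np

# outputs of two parallel convolution layers, shape (batch, channels, height, width)
feat_a = np.random.randn(1, 48, 27, 27)
feat_b = np.random.randn(1, 48, 27, 27)

# maxout unit: element-wise maximum over the parallel feature maps
maxout = np.maximum(feat_a, feat_b)
print(maxout.shape)  # (1, 48, 27, 27)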
@Coderx7
Coderx7 / Caffe + Ubuntu 12.04 64bit + CUDA 6.5 配置说明.md
Created May 18, 2016 13:54 — forked from bearpaw/Caffe + Ubuntu 12.04 64bit + CUDA 6.5 配置说明.md
Caffe + Ubuntu 12.04 / 14.04 64-bit + CUDA 6.5 / 7.0 setup guide

Caffe + Ubuntu 12.04 64-bit + CUDA 6.5 setup guide

These steps set up the Intel integrated graphics to drive the display while the NVIDIA GPU is used for computation.

1. Install the development dependencies

Install the basic packages needed for development:

sudo apt-get install build-essential
sudo apt-get install vim cmake git
sudo apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev libopencv-dev libboost-all-dev libhdf5-serial-dev
@Coderx7
Coderx7 / CaffePlot.py
Created June 6, 2016 04:10
a plotting script for Caffe that shows the loss/training curves
# In the name of GOD the most compassionate the most merciful
# Originally developed by Yasse Souri
# Just added a search of the current directory so that users don't have to use the command prompt anymore!
# It also shows the top 4 accuracies achieved so far and displays the highest one in the plot title.
# Coded By: Seyyed Hossein Hasan Pour ([email protected])
# -------How to Use ---------------
# 1. Just place your Caffe training/test log file (with a .log extension) next to this script
# and then run the script. If you have multiple logs next to the script, it will plot all of them.
# You may also copy this script to the working directory where you generate/keep your train/test logs
# and simply execute it to see the curves plotted.
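A minimal sketch of the log-parsing idea such a script is built on, assuming the usual Caffe solver log lines of the form "Iteration N, loss = X" (the exact wording can vary between Caffe versions, and the log file name below is only a placeholder):

import re
import matplotlib.pyplot as plt

def parse_loss(log_path):
    # return (iterations, losses) read from a Caffe training log
    pattern = re.compile(r'Iteration (\d+).*?loss = ([0-9.eE+-]+)')
    iters, losses = [], []
    with open(log_path) as f:
        for line in f:
            m = pattern.search(line)
            if m:
                iters.append(int(m.group(1)))
                losses.append(float(m.group(2)))
    return iters, losses

iters, losses = parse_loss('train.log')  # placeholder log file name
plt.plot(iters, losses)
plt.xlabel('iteration')
plt.ylabel('training loss')
plt.show()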
@Coderx7
Coderx7 / Caffe_Convnet_ConfuxionMatrix.py
Last active November 19, 2018 10:02 — forked from axel-angel/convnet_test.py
Caffe script to compute accuracy and a confusion matrix; added mean subtraction so it now reports the accuracy correctly (just like Caffe)
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Author: Axel Angel, copyright 2015, license GPLv3.
# added mean subtraction so that the accuracy is reported correctly, just like Caffe, when a mean is subtracted
# Seyyed Hossein Hasan Pour
# [email protected]
# 7/3/2016
import sys
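The preview stops at the imports; the two ideas the description names are mean subtraction before classification and accumulating a confusion matrix. A minimal numpy sketch of both, with illustrative names rather than the variables of the original script:

import numpy as np

def preprocess(image, mean):
    # subtract the dataset mean (same shape and layout as the image) before feeding the net
    return image.astype(np.float32) - mean

def confusion_matrix(true_labels, predicted_labels, num_classes):
    # rows = true class, columns = predicted class
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(true_labels, predicted_labels):
        cm[t, p] += 1
    return cm

cm = confusion_matrix([0, 1, 1, 2], [0, 1, 2, 2], num_classes=3)
print(float(cm.trace()) / cm.sum())  # overall accuracy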
@Coderx7
Coderx7 / confusionMatrix_Recall_Precision_F1Scroe_Caffe.py
Last active February 24, 2021 13:48
Confusion Matrix with Recall, Precision and F1-Score for Caffe
#!/usr/bin/python
# Author: SeyyedHossein Hasanpour copyright 2017, license GPLv3.
# Seyyed Hossein Hasan Pour:
# [email protected]
# Changelog:
# 2015:
# initial code to calculate the confusion matrix, by Axel Angel
# 7/3/2016: (new features added by Hossein)
# added mean subtraction so that the accuracy is reported correctly, just like Caffe, when a mean is subtracted
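A minimal sketch of how recall, precision and F1-score fall out of a confusion matrix, assuming the same convention as above (rows = true class, columns = predicted class); the epsilon guard is only there to avoid division by zero:

import numpy as np

def precision_recall_f1(cm):
    cm = cm.astype(np.float64)
    eps = 1e-12
    tp = np.diag(cm)                          # correctly classified samples per class
    precision = tp / (cm.sum(axis=0) + eps)   # column sums = samples predicted as each class
    recall = tp / (cm.sum(axis=1) + eps)      # row sums = actual samples of each class
    f1 = 2 * precision * recall / (precision + recall + eps)
    return precision, recall, f1

cm = np.array([[50, 2], [5, 43]])
print(precision_recall_f1(cm))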
@Coderx7
Coderx7 / mean-image-batch.py
Created January 12, 2017 15:11
a simple script for calculating the mean of a batch of images, in a semi-vectorized version, for situations where memory is not large enough to hold all the data and you need to loop through it
#English: a simple handy snippet which I wrote specifically for calculating the mean of a batch of images,
#in semi-vectorized and unvectorized fashion, along with a fully-numpy example to test the output!
#
#Farsi (translated):
#compute the mean of a batch of images in vectorized, unvectorized, and semi-vectorized fashion.
#if the vectorized version throws an out-of-memory error, it is better to use the semi-vectorized one,
#because the unvectorized version (the plain loop) is very slow.
#[email protected]
#Seyyed Hossein Hasanpour
#1/12/2017 6:47 pm
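A minimal sketch of the semi-vectorized approach described above: vectorize within each image but loop over the batch, so only one image needs to be resident at a time. The (N, H, W, C) layout is an assumption for illustration:

import numpy as np

def batch_mean_semi_vectorized(images):
    channel_sum = np.zeros(images.shape[-1], dtype=np.float64)
    pixel_count = 0
    for img in images:                        # one image at a time
        channel_sum += img.sum(axis=(0, 1))   # vectorized within the image
        pixel_count += img.shape[0] * img.shape[1]
    return channel_sum / pixel_count

batch = np.random.rand(8, 32, 32, 3)
print(batch_mean_semi_vectorized(batch))      # should match np.mean(batch, axis=(0, 1, 2))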
@Coderx7
Coderx7 / calculate_std.py
Last active January 12, 2017 18:37
a non-vectorized and a semi-vectorized implementation for calculating the std of a batch of images. The semi-vectorized one is the one to use, since it is as fast as numpy.std
#In the name of God, the most compassionate the most merciful
#a non-vectorized and a semi-vectorized implementation for calculating the std of a batch of images.
#the semi-vectorized one is the one to use, since it is as fast as numpy.std
#Seyyed Hossein Hasan pour
#[email protected]
#1/12/2017
import math
#unvectorized version --really slow!
def calc_std_classic(a):
#sum all elements in each channel and divide by the number of elements
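A minimal sketch of the semi-vectorized std in the same spirit: accumulate the per-channel sum and sum of squares one image at a time, then use std = sqrt(E[x^2] - E[x]^2), which matches numpy.std's default (population) definition. The (N, H, W, C) layout is again an assumption:

import numpy as np

def batch_std_semi_vectorized(images):
    channels = images.shape[-1]
    total = np.zeros(channels, dtype=np.float64)
    total_sq = np.zeros(channels, dtype=np.float64)
    count = 0
    for img in images:
        img = img.astype(np.float64)
        total += img.sum(axis=(0, 1))
        total_sq += (img ** 2).sum(axis=(0, 1))
        count += img.shape[0] * img.shape[1]
    mean = total / count
    return np.sqrt(total_sq / count - mean ** 2)

batch = np.random.rand(8, 32, 32, 3)
print(batch_std_semi_vectorized(batch))       # should match np.std(batch, axis=(0, 1, 2))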
@Coderx7
Coderx7 / mean-std.py
Last active January 12, 2017 18:05
mean and std calculation with non-vectorized and semi-vectorized implementations; good candidates for when numpy.mean and numpy.std do not work because of memory issues
#In the name of God, the most compassionate the most merciful
import math
import numpy as np
#English: a simple handy snippet which I wrote specifically for calculating the mean of a batch of images,
#in semi-vectorized and unvectorized fashion, along with a fully-numpy example to test the output!
#Farsi (translated):
#compute the mean of a batch of images in vectorized, unvectorized, and semi-vectorized fashion.
#if the vectorized version throws an out-of-memory error, it is better to use the semi-vectorized one,
#because the unvectorized version (the plain loop) is very slow.
#[email protected]
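As the comments say, numpy.mean and numpy.std are the reference whenever memory allows; on a batch small enough to fit, the per-channel values the script computes can be checked against them directly (a quick sketch, with an assumed (N, H, W, C) batch):

import numpy as np

batch = np.random.rand(16, 32, 32, 3)
print(np.mean(batch, axis=(0, 1, 2)))  # reference per-channel mean
print(np.std(batch, axis=(0, 1, 2)))   # reference per-channel std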
@Coderx7
Coderx7 / cifar10-normalize.py
Created January 12, 2017 18:30
CIFAR10-lmdb-zeropad-normalize script for Caffe
#in the name of God, the most compassionate the most merciful
#Seyyed Hossein Hasanpour
#[email protected]
#script for zero-padding and normalizing the CIFAR10 dataset (can also be used for CIFAR100)
import math
import caffe
import lmdb
import numpy as np
from caffe.proto import caffe_pb2
import cv2
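A minimal numpy sketch of the zero-padding and per-channel normalization the script applies to the CIFAR10 images before writing them back to LMDB; the 4-pixel pad and the (N, H, W, C) layout are assumptions for illustration, not values taken from the script:

import numpy as np

def zeropad_and_normalize(images, pad=4):
    padded = np.pad(images, ((0, 0), (pad, pad), (pad, pad), (0, 0)),
                    mode='constant', constant_values=0)
    mean = padded.mean(axis=(0, 1, 2))        # per-channel mean over the batch
    std = padded.std(axis=(0, 1, 2))          # per-channel std over the batch
    return (padded - mean) / std

batch = np.random.randint(0, 256, size=(10, 32, 32, 3)).astype(np.float32)
print(zeropad_and_normalize(batch).shape)     # (10, 40, 40, 3)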
@Coderx7
Coderx7 / CIFAR10_Pylearn2_To_LMDB_Convertor.py
Created February 14, 2017 11:01
This is the script I wrote to convert the CIFAR10/100 (GCN, whitened) datasets from pylearn2 to LMDB.
#in the name of GOD
#pylearn2 cifar10 convertor to lmdb
#by:Seyyed Hossein Hasanpour
#[email protected]
#2/14/2017
import numpy as np
import cPickle
import lmdb
import caffe
from caffe.proto import caffe_pb2
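The preview stops at the imports; the core of any pylearn2-to-LMDB convertor is wrapping each image array in a Caffe Datum and writing it under a fixed-width key. A minimal sketch of that write loop, with made-up data standing in for the pylearn2 arrays and an illustrative map_size and output path:

import lmdb
import numpy as np
import caffe

# stand-ins for the arrays loaded from the pylearn2 pickle (C x H x W, uint8)
images = np.random.randint(0, 256, size=(100, 3, 32, 32)).astype(np.uint8)
labels = np.random.randint(0, 10, size=100)

env = lmdb.open('cifar10_train_lmdb', map_size=int(1e9))  # illustrative path and size
with env.begin(write=True) as txn:
    for i, (img, label) in enumerate(zip(images, labels)):
        datum = caffe.io.array_to_datum(img, int(label))   # 3-D array -> caffe Datum
        txn.put('{:08d}'.format(i).encode('ascii'), datum.SerializeToString())
env.close()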