Pietro Marchesi (pietromarchesi)
University of Amsterdam
import numpy as np

# Experiment dimensions
n_stim = 3                  # stimulus types
n_responses = 2             # response types
trial_size = 10             # time bins per trial
n_trials_per_stimtype = 50  # trials per stimulus type
n_neurons = 4               # recorded units

# Pre-allocate with NaN so unfilled trials can be skipped later
# with NaN-aware reductions such as np.nanmean
single_trial_data = np.full(
    [n_trials_per_stimtype, n_neurons, n_stim, n_responses, trial_size],
    np.nan)

import numpy as np

n_stim = 3
n_responses = 2
trial_size = 10
n_trials_per_stimtype = 50
n_neurons = 84

single_trial_data = np.full(
    [n_trials_per_stimtype, n_neurons, n_stim, n_responses, trial_size],
    np.nan)
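
Trials for each (stimulus, response) condition are then written into their slot of the array, and the NaN padding lets NaN-aware reductions skip conditions with fewer recorded trials. A minimal usage sketch, reusing the names defined above (the fill values are hypothetical):

# Hypothetical fill: trial 0, all neurons, stimulus 1, response 0
single_trial_data[0, :, 1, 0, :] = np.ones((n_neurons, trial_size))

# NaN-aware average over trials; slots that were never filled stay NaN
# (np.nanmean warns about all-NaN slices but still returns NaN for them)
mean_response = np.nanmean(single_trial_data, axis=0)
print(mean_response.shape)  # (n_neurons, n_stim, n_responses, trial_size)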
library(psych)

# Analyze the difference between change and initial value.
#
# Source:
# Tu, Y. K. (2016). Testing the relation between percentage change
# and baseline value. Scientific Reports, 6.
pietromarchesi / mutual_info.py
Created December 23, 2016 12:50, forked from GaelVaroquaux/mutual_info.py
Estimating entropy and mutual information with scikit-learn
'''
Non-parametric computation of entropy and mutual information.

Adapted by G. Varoquaux from code created by R. Brette, itself based on
several papers (see references in the code).

These computations rely on nearest-neighbor statistics.
'''
import numpy as np
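
The rest of the file is truncated here. As a sketch of the nearest-neighbor technique the docstring refers to, the following is the standard Kozachenko-Leonenko entropy estimator (the same family of estimator the gist implements, though not necessarily line-for-line its code); scipy and scikit-learn are assumed available:

import numpy as np
from scipy.special import gamma, psi
from sklearn.neighbors import NearestNeighbors

def knn_entropy(X, k=3):
    '''Kozachenko-Leonenko differential entropy estimate, in nats.'''
    n, d = X.shape
    # Ask for k + 1 neighbors: each point is its own nearest neighbor
    dist, _ = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    r = dist[:, k]  # distance to the k-th genuine neighbor
    # Volume of the d-dimensional unit ball
    volume_unit_ball = np.pi ** (0.5 * d) / gamma(0.5 * d + 1)
    # H = d * E[log(2r)] + log(V_d) + psi(n) - psi(k)
    return (d * np.mean(np.log(2 * r + np.finfo(float).eps))
            + np.log(volume_unit_ball) + psi(n) - psi(k))

# Sanity check: a 1D standard normal has H = 0.5 * log(2*pi*e), about 1.42 nats
rng = np.random.default_rng(0)
print(knn_entropy(rng.normal(size=(5000, 1))))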