Created April 21, 2016 16:09
Lotka-Volterra
lv.data.R

T <- 20
t0 <- 1900
y <-
structure(c(47.2, 70.2, 77.4, 36.3, 20.6, 18.1, 21.4, 22, 25.4, 27.1, 40.3, 57,
76.6, 52.3, 19.5, 11.2, 7.6, 14.6, 16.2, 24.7, 6.1, 9.8, 35.2, 59.4, 41.7, 19,
13, 8.3, 9.1, 7.4, 8, 12.3, 19.5, 45.7, 51.1, 29.7, 15.8, 9.7, 10.1, 8.6),
.Dim = c(20, 2))
ts <-
c(1901, 1902, 1903, 1904, 1905, 1906, 1907, 1908, 1909, 1910, 1911, 1912,
1913, 1914, 1915, 1916, 1917, 1918, 1919, 1920)
y0 <- c(30, 4)
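As a quick sanity check before fitting, the two series can be plotted straight from the data file. This is a minimal R sketch (not part of the original gist), assuming the block above is saved as examples/lv/lv.data.R as in the commands below; the first column of y is treated as the prey series and the second as the predator series, matching the model further down.

source("examples/lv/lv.data.R")  # defines T, t0, y, ts, y0
matplot(ts, y, type = "b", pch = c(1, 2), lty = 1,
        xlab = "year", ylab = "population index",
        main = "Observed predator-prey series")
legend("topright", legend = c("y[,1] (prey)", "y[,2] (predator)"), pch = c(1, 2))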
make examples/lv/lv
./examples/lv/lv sample num_samples=10 num_warmup=10 algorithm=hmc stepsize=0.001 data file=examples/lv/lv.data.R
./examples/lv/lv sample num_samples=10 num_warmup=10 algorithm=hmc stepsize=0.001 data file=examples/lv/lv.data.R
./examples/lv/lv sample num_samples=1000 num_warmup=1000 algorithm=hmc stepsize=0.001 data file=examples/lv/lv.data.R
./bin/stansummary output.csv
Inference for Stan model: lv_model | |
1 chains: each with iter=(1000); warmup=(0); thin=(1); 1000 iterations saved. | |
Warmup took (149) seconds, 2.5 minutes total | |
Sampling took (139) seconds, 2.3 minutes total | |
Mean MCSE StdDev 5% 50% 95% N_Eff N_Eff/s R_hat | |
lp__ -91 1.2e-01 1.8e+00 -94 -90 -89 229 1.7e+00 1.0e+00 | |
accept_stat__ 0.93 2.9e-03 9.2e-02 0.76 0.97 1.00 1000 7.2e+00 1.0e+00 | |
stepsize__ 0.16 1.2e-16 8.3e-17 0.16 0.16 0.16 0.50 3.6e-03 1.0e+00 | |
treedepth__ 4.0 3.5e-02 1.0e+00 2.0 4.0 5.0 893 6.4e+00 1.0e+00 | |
n_leapfrog__ 17 3.6e-01 1.1e+01 3.0 15 31 907 6.5e+00 1.0e+00 | |
divergent__ 0.00 0.0e+00 0.0e+00 0.00 0.00 0.00 1000 7.2e+00 nan | |
energy__ 94 1.6e-01 2.5e+00 90 94 98 240 1.7e+00 1.0e+00 | |
sigma[1] 5.6 5.9e-02 1.1e+00 4.1 5.4 7.5 331 2.4e+00 1.0e+00 | |
sigma[2] 3.8 3.7e-02 7.0e-01 2.9 3.7 5.1 357 2.6e+00 1.0e+00 | |
param[1] 0.55 2.0e-03 2.6e-02 0.51 0.55 0.59 167 1.2e+00 1.0e+00 | |
param[2] 0.028 1.2e-04 1.6e-03 0.026 0.028 0.031 181 1.3e+00 1.0e+00 | |
param[3] 0.84 3.2e-03 4.1e-02 0.77 0.84 0.90 163 1.2e+00 1.0e+00 | |
param[4] 0.027 8.7e-05 1.4e-03 0.025 0.027 0.029 248 1.8e+00 1.0e+00 | |
Samples were drawn using hmc with nuts. | |
For each parameter, N_Eff is a crude measure of effective sample size, | |
and R_hat is the potential scale reduction factor on split chains (at | |
convergence, R_hat=1).
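The summary table above can be reproduced approximately from the raw draws. A minimal R sketch, assuming the run wrote CmdStan's default output.csv into the working directory; CmdStan CSV files begin with '#' comment lines, which read.csv can skip.

draws <- read.csv("output.csv", comment.char = "#")
post  <- draws[, grep("^param|^sigma", names(draws))]   # columns like param.1, sigma.2
round(t(apply(post, 2, quantile, probs = c(0.05, 0.5, 0.95))), 3)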
lv.stan

functions {
  real[] lv(real t,
            real[] y,
            real[] param,
            real[] x_r,
            int[] x_i) {
    real dydt[2];
    // Lotka-Volterra: prey grows and is eaten; predator dies off and gains from predation
    dydt[1] = param[1] * y[1] - param[2] * y[1] * y[2];
    dydt[2] = -param[3] * y[2] + param[4] * y[1] * y[2];
    return dydt;
  }
}
data {
  int<lower=1> T;   // number of observation times
  real t0;          // initial time
  real ts[T];       // observation times
  real y0[2];       // initial state (prey, predator)
  real y[T,2];      // observed populations
}
transformed data {
  real x_r[0];
  int x_i[0];
  real rel_tol;
  real abs_tol;
  real max_num_steps;
  rel_tol = 1e-10;
  abs_tol = 1e-10;
  max_num_steps = 1e8;
}
parameters {
  vector<lower=0>[2] sigma;
  real<lower=0> param[4];
}
model {
  real y_hat[T,2];
  sigma ~ cauchy(0, 2.5);
  // Note that the priors are *very* informative
  param[1] ~ normal(0.4, 1e-1);
  param[2] ~ normal(0.018, 1e-2);
  param[3] ~ normal(0.8, 1e-1);
  param[4] ~ normal(0.023, 1e-2);
  y_hat = integrate_ode_cvode(lv, y0, t0, ts, param, x_r, x_i, rel_tol, abs_tol, max_num_steps);
  for (t in 1:T) {
    y[t] ~ normal(y_hat[t], sigma);
  }
}
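To see what the fitted parameters imply, one can solve the same system at the posterior means reported above (param roughly 0.55, 0.028, 0.84, 0.027) and overlay the data. This is a minimal R sketch, not part of the gist, assuming the deSolve package is available; lv_rhs mirrors the lv() function in the Stan model.

library(deSolve)
source("examples/lv/lv.data.R")
lv_rhs <- function(t, y, p) {
  # same right-hand side as the Stan lv() function
  list(c( p[1] * y[1] - p[2] * y[1] * y[2],
         -p[3] * y[2] + p[4] * y[1] * y[2]))
}
p_hat <- c(0.55, 0.028, 0.84, 0.027)   # posterior means of param[1..4] from stansummary
sol   <- ode(y = y0, times = c(t0, ts), func = lv_rhs, parms = p_hat)
matplot(sol[, 1], sol[, 2:3], type = "l", lty = 1, xlab = "year", ylab = "population")
matpoints(ts, y, pch = c(1, 2))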