library(data.table)
q <- c(0.001, 0.002, 0.003, 0.003, 0.004, 0.004, 0.005, 0.007, 0.009, 0.011)  # mortality rates by policy year
w <- c(0.05, 0.07, 0.08, 0.10, 0.14, 0.20, 0.20, 0.20, 0.10, 0.04)            # lapse rates by policy year
P <- 100    # annual premium per in-force policy
S <- 25000  # sum assured (death benefit)
r <- 0.02   # discount rate
dt <- data.table(q, w)  # one row per policy year
npv <- function(cf, r, S, P) {
  cf[, inforce := shift(cumprod(1 - q - w), fill = 1)  # policies in force at start of year
  ][, lapses := inforce * w                            # expected lapses during the year
  ][, deaths := inforce * q                            # expected deaths during the year
  ][, claims := deaths * S                             # death claim outgo
  ][, premiums := inforce * P                          # premium income
  ][, ncf := premiums - claims                         # net cash flow for the year
  ][, d := (1 / (1 + r))^(.I)                          # end-of-year discount factor
  ][, sum(ncf * d)]                                    # net present value
}
npv(dt, r, S, P)
#> [1] 50.32483
microbenchmark::microbenchmark(npv(dt, r, S, P))
#> Unit: milliseconds
#> expr min lq mean median uq max neval
#> npv(dt, r, S, P) 2.5791 2.71035 2.964293 2.85625 3.10385 6.0357 100
Created on 2021-01-15 by the reprex package (v0.3.0)
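As a sanity check on the data.table version, here is the same calculation in plain vectorised base R (just a sketch; npv_vec is my own name for it), which reproduces the same figure:

npv_vec <- function(q, w, r, S, P) {
  inforce <- c(1, head(cumprod(1 - q - w), -1))  # in force at start of each year
  ncf <- inforce * P - inforce * q * S           # premiums less death claims
  d <- (1 / (1 + r))^seq_along(ncf)              # end-of-year discount factors
  sum(ncf * d)
}
npv_vec(q, w, r, S, P)  # matches npv(dt, r, S, P): 50.32483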
I've been running them in an Ubuntu VM on Windows 10 (using WSL). The 10 manual runs I did for Nim showed an occasional spike (e.g. some background operation running). The mean is skewed by that high 5989 µs maximum: over 100 runs it contributes ~5989/100 ≈ 60 µs to the mean, which would otherwise be around 8 µs.
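To make that arithmetic concrete, a toy illustration (made-up timings, chosen to mirror ~8 µs runs with one 5989 µs spike):

times <- c(rep(8, 99), 5989)  # hypothetical: 99 fast runs plus one spike, in µs
mean(times)                   # 67.81 -- dragged up by the single outlier
median(times)                 # 8     -- unaffected by it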
If you are timing homogeneous operations, the median is probably the more useful indicator purely for benchmarking. If it were at scale and you wanted to test a variety of policies, then I agree the mean (or a truncated mean) would be a better indicator of your production run time.
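For the truncated version, base R's mean() takes a trim argument (the fraction dropped from each end), e.g. on the toy timings above:

times <- c(rep(8, 99), 5989)  # same toy timings as above
mean(times, trim = 0.05)      # 8 -- top and bottom 5% dropped before averaging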
I had a look at the Julia benchmarks; they roughly tie up with your table. LuaJIT (which I'd never heard of) gives Julia a good run for its money on the JIT front. The downside of these is that they're probably harder to interface with Python if you're building a production pipeline.