library(data.table)
q <- c(0.001, 0.002, 0.003, 0.003, 0.004, 0.004, 0.005, 0.007, 0.009, 0.011)  # per-period mortality rates
w <- c(0.05, 0.07, 0.08, 0.10, 0.14, 0.20, 0.20, 0.20, 0.10, 0.04)            # per-period lapse rates
P <- 100    # premium per period per unit in force
S <- 25000  # death benefit (sum assured)
r <- 0.02   # discount rate
dt <- as.data.table(cbind(q,w))
npv <- function(cf, r, S, P) {
  cf[, inforce := shift(cumprod(1 - q - w), fill = 1)  # in-force fraction at the start of each period
  ][, lapses := inforce * w                            # expected lapses in the period
  ][, deaths := inforce * q                            # expected deaths in the period
  ][, claims := deaths * S                             # death benefit outgo
  ][, premiums := inforce * P                          # premium income
  ][, ncf := premiums - claims                         # net cash flow
  ][, d := (1/(1+r))^(.I)                              # discount factor; .I is the period number
  ][, sum(ncf*d)]                                      # NPV of the discounted net cash flows
}
npv(dt,r,S,P)
#> [1] 50.32483
microbenchmark::microbenchmark(npv(dt,r,S,P))
#> Unit: milliseconds
#> expr min lq mean median uq max neval
#> npv(dt, r, S, P) 2.5791 2.71035 2.964293 2.85625 3.10385 6.0357 100

Created on 2021-01-15 by the reprex package (v0.3.0)
StaticArrays helps when the size of the array is less than about 15 items, IIRC. The sample here is unrealistically short (q, for example, is often much longer in practice), so it would probably help in this case but would be misleading for more "real world" problem sizes.
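For a toy case of this size, using it would look something like the sketch below (hypothetical code; the q_s/w_s names are mine and the vectors simply mirror the R assumptions above):

using StaticArrays

# the same 10-period assumptions as the R reprex above
q = [0.001, 0.002, 0.003, 0.003, 0.004, 0.004, 0.005, 0.007, 0.009, 0.011]
w = [0.05, 0.07, 0.08, 0.10, 0.14, 0.20, 0.20, 0.20, 0.10, 0.04]

# fixed-size, stack-friendly copies; element-wise operations on these avoid heap allocation
q_s = SVector{10}(q)
w_s = SVector{10}(w)
decrements = 1 .- q_s .- w_s  # still an SVector, no heap allocation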
Here's a small tweak to the code: it avoids the more expensive concatenation ([arr1; arr2]) and instead modifies the same array that cumprod generates, which is close to an ~8x speedup.
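In rough terms the change looks like the sketch below (hypothetical, not the exact benchmarked version; it assumes the baseline built the beginning-of-period in-force vector by concatenation as [1.0; cumprod(1 .- q .- w)[1:end-1]], and the name npv_inplace is made up):

function npv_inplace(q, w, P, S, r)
    inforce = cumprod(1 .- q .- w)     # end-of-period survivorship
    # shift right by one slot inside the same array instead of concatenating [1.0; ...]
    for t in length(inforce):-1:2
        inforce[t] = inforce[t - 1]
    end
    inforce[1] = 1.0                   # everyone is in force at the start of period 1
    v = 1 / (1 + r)
    # premiums less death claims, each discounted by its period
    return sum((inforce[t] * P - inforce[t] * q[t] * S) * v^t for t in eachindex(inforce))
end

With the q and w vectors above, npv_inplace(q, w, 100.0, 25_000.0, 0.02) should reproduce the ≈50.3 NPV from the data.table version; the point of the rewrite is that no temporary array is created by concatenation on each call.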
And inserting an @view where the cumprod array is sliced makes the operation non-allocating and ekes out a touch more performance.
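As a generic illustration of that change (not the exact diff):

surv = cumprod(1 .- q .- w)

# a plain slice copies the elements into a freshly allocated array
prior = surv[1:end-1]

# @view instead returns a lightweight window into surv, with no copy of the data
prior = @view surv[1:end-1]

The @views macro does the same for every slice in an expression.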