Thom Volker (thomvolker)

A slightly modified estimice() and .norm.draw()

Thom Benjamin Volker

04-03-2025

An attempt at a faster estimice() function for the R-package mice.

devtools::load_all(path = "C:/Users/5868777/surfdrive/Documents/mice")

TabPFN in R with reticulate

Generate some example data.

X1 <- runif(200, 0, 10)
X2 <- sin(X1) + rnorm(200, 0, 0.5)
Y <- 3 + 0.5 * X1 + X2 + rnorm(200, 0, 1)
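A minimal sketch of how TabPFN might then be called from R through reticulate, continuing from the simulated data above. The module name `tabpfn` and the `TabPFNRegressor` class refer to the Python tabpfn package (recent releases provide a regressor); this requires a working Python environment and is an added illustration, not part of the original gist.

```r
# Sketch: assumes Python with the 'tabpfn' package installed and a
# TabPFNRegressor class; continues from the simulated X1, X2, Y above
library(reticulate)

tabpfn <- import("tabpfn")
model  <- tabpfn$TabPFNRegressor()

X <- cbind(X1, X2)          # feature matrix for the regressor
model$fit(X, Y)             # in-context "training" on the observed data
preds <- model$predict(X)   # point predictions at the training points
```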

Prediction intervals with missing data

authors: Florian van Leeuwen, Thom Benjamin Volker, Gerko Vink and Stef van Buuren

The development and application of (clinical) prediction models is complicated by missing data, as most analysis techniques cannot readily handle missing values: model parameters cannot be estimated, and predictions cannot be calculated. Ad-hoc fixes, such as listwise deletion or mean imputation, work only under limited circumstances, such as data missing completely at random (MCAR), which are unlikely to hold in practice. A more principled approach to dealing with missing data is multiple imputation (MI). Many studies have confirmed that MI yields unbiased and efficient estimates of model parameters under fairly general conditions. Practitioners in (clinical) prediction, however, commonly consider single imputation to be sufficient. The present study compares single versus multiple imputation for making point estimates (predictions) and prediction intervals.

thomvolker / using-google-colab-with-r-and-tensorflow.ipynb
Created December 17, 2024 11:00
Using Google Colab with R and tensorflow.ipynb

Can we get away with single imputation when the goal is to obtain well-calibrated prediction intervals?

Of course not! 😁

library(foreach)

set.seed(123)

nsim <- 200
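The simulation contrasts single and multiple imputation for prediction intervals. As background, Rubin's rules for pooling predictions across m imputed data sets can be sketched in base R. This is a generic sketch, not the authors' simulation code; `pool_predictions` is a hypothetical helper, and the Barnard-Rubin degrees-of-freedom correction is omitted for brevity.

```r
# Pool m sets of predictions and their squared standard errors with
# Rubin's rules (generic sketch, not the simulation code itself)
pool_predictions <- function(qhat, u, level = 0.95) {
  # qhat: m x n matrix of predictions (one row per imputed data set)
  # u:    m x n matrix of squared standard errors of those predictions
  m    <- nrow(qhat)
  qbar <- colMeans(qhat)              # pooled prediction
  ubar <- colMeans(u)                 # within-imputation variance
  b    <- apply(qhat, 2, var)         # between-imputation variance
  t    <- ubar + (1 + 1 / m) * b      # total variance
  z    <- qnorm(1 - (1 - level) / 2)  # normal quantile (df correction omitted)
  cbind(fit = qbar, lwr = qbar - z * sqrt(t), upr = qbar + z * sqrt(t))
}
```

In practice the normal quantile would be replaced by a t-quantile with Barnard-Rubin degrees of freedom, as mice::pool does for parameter estimates.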
thomvolker / dr-transformations.md
Last active September 27, 2024 10:50
Density ratio estimation is invariant to one-to-one and onto transformations


Thom Benjamin Volker

At least in the univariate case; the multivariate case remains to be considered in a future gist.

N <- 10000000
x1 <- rexp(N, 1)
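The claim can be illustrated directly in base R (an added illustration, not part of the gist): for a one-to-one map g, the change-of-variable Jacobian appears in both the numerator and the denominator density, so it cancels from the ratio.

```r
# Invariance of the density ratio under a one-to-one transformation:
# p = Exp(1), q = Exp(2), and the map g(x) = log(x)
x <- c(0.5, 1, 2, 5)

r_x <- dexp(x, 1) / dexp(x, 2)   # ratio on the original scale

# Densities of Y = log(X): p_Y(y) = p_X(e^y) * e^y, likewise for q,
# so the Jacobian e^y cancels from the ratio
y   <- log(x)
p_y <- dexp(exp(y), 1) * exp(y)
q_y <- dexp(exp(y), 2) * exp(y)
r_y <- p_y / q_y                 # ratio on the transformed scale

all.equal(r_x, r_y)              # TRUE: the two ratios coincide
```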

The following code shows how to obtain density ratio estimates that are regularized to $1$ instead of $0$ (or an estimated intercept).

pred_adapt <- function(nu, de, ce, sigma, lambda) {
  # Gaussian kernel Gram matrices between samples and centers
  Knu <- densityratio::distance(as.matrix(nu), as.matrix(ce), TRUE) |> kernel_gaussian(sigma)
  Kde <- densityratio::distance(as.matrix(de), as.matrix(ce), TRUE) |> kernel_gaussian(sigma)

  Kdede <- crossprod(Kde) / nrow(Kde)
  Knunu <- colMeans(Knu)
  # Fix the first coefficient at 1 and solve only for the remaining ones,
  # so the estimate is regularized towards 1 rather than 0
  alpha <- solve(Kdede[-1, -1] + lambda * diag(ncol(Kde) - 1), Knunu[-1] - Kdede[1, -1])

  # Density ratio estimates at the numerator samples (first basis column
  # carries the fixed unit coefficient); added to close the truncated gist
  drop(Knu[, 1] + Knu[, -1] %*% alpha)
}

This code shows that divergence-based two-sample tests implemented in the densityratio package yield nominal Type-I error rates. TODO: run file and create output.


library(tibble)
library(purrr)
library(furrr)
library(densityratio)

mice.impute.pmm() can give counterintuitive results when using type-1 matching; see, for example, the following scenario.

The following code was provided by Stef van Buuren, inspired by Templ (2023), Visualization and Imputation of Missing Values.

library(mice)
#> 
#> Attaching package: 'mice'
#> The following object is masked from 'package:stats':
#> 
#>     filter
thomvolker / mvn-probs.md
Last active May 3, 2024 12:22
Finding ellipsoidal probabilities under the multivariate normal model


Thom Benjamin Volker

Introduction

Many statistical models impose multivariate normality, for example on the regression parameters in a linear regression model. A question that may arise is how to find the probability that a random draw from such a distribution falls within a given ellipsoidal region.
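A standard route to such ellipsoidal probabilities (a sketch of the textbook result, not necessarily the gist's exact approach): if X follows a p-dimensional normal distribution N(mu, Sigma), the squared Mahalanobis distance (x - mu)' Sigma^{-1} (x - mu) follows a chi-squared distribution with p degrees of freedom, so the probability mass of the ellipsoid with threshold c is pchisq(c, df = p).

```r
# Probability of the ellipsoid (x - mu)' solve(Sigma) (x - mu) <= c
# under N(mu, Sigma): chi-squared result, checked by Monte Carlo
set.seed(42)
p     <- 2
mu    <- c(1, -1)
Sigma <- matrix(c(2, 0.5, 0.5, 1), 2, 2)
c0    <- 3

prob_analytic <- pchisq(c0, df = p)       # chi-squared probability

# Monte Carlo check: sample from N(mu, Sigma) via the Cholesky factor
n  <- 100000
z  <- matrix(rnorm(n * p), n, p)
x  <- sweep(z %*% chol(Sigma), 2, mu, "+")
d2 <- mahalanobis(x, mu, Sigma)           # squared Mahalanobis distances
prob_mc <- mean(d2 <= c0)                 # agrees with prob_analytic
```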