Apoorva Lal (apoorvalal)


automatically reclaiming RAM from macOS's pesky background services

Edit ramcleaner.sh to kill the processes that tend to balloon RAM usage. mediaanalysisd is the main culprit; it often gobbles up 3-4 GB of RAM in random spikes, which is prohibitive on the base 16 GB Mac mini that I run as a home server.

This bash script searches for each offending service by pgrep-ing active processes and kills whatever it finds. When OS services are observationally equivalent to viruses, they ought to be treated accordingly.

Schedule it by running crontab -e and adding the entry below, which runs the cleanup script every two minutes; cron's own output is discarded, while the script logs its actions to ~/.ramcleaner.log.

*/2 * * * * /bin/bash ramcleaner.sh >/dev/null 2>&1
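For reference, here is a minimal sketch of what ramcleaner.sh could look like: it pgreps each listed service and appends a timestamped line to ~/.ramcleaner.log whenever it kills one. The service list beyond mediaanalysisd and the log format are illustrative placeholders, not the gist's actual contents.

#!/bin/bash
# ramcleaner.sh (sketch): kill memory-hungry macOS background services
# edit SERVICES to taste; photoanalysisd here is only a placeholder example
SERVICES="mediaanalysisd photoanalysisd"
LOG="$HOME/.ramcleaner.log"
for svc in $SERVICES; do
  pids=$(pgrep -x "$svc")
  if [ -n "$pids" ]; then
    echo "$(date '+%Y-%m-%d %H:%M:%S') killing $svc (pids: $pids)" >> "$LOG"
    # launchd respawns these daemons on demand, so a hard kill is safe
    kill -9 $pids
  fi
done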
apoorvalal / _ollama_runner.py
Last active April 14, 2025 00:54
Run local LLMs via the CLI. First install ollama, pull the model you want, and then run `./ollama_runner.py <prompt> <modelname> <outputfile>`; the latter two are optional (defaults are gemma3:27b and no output file).
#!/Users/alal/miniforge3/envs/llm/bin/python
# this needs to point to a python virtualenv with ollama-python
# your system should also have ollama installed
# if you have an nvidia gpu, install lshw first before installing ollama
import argparse
import ollama
def main(msg, mod='gemma3:27b', outfile=None):
    # send a single user message to the local ollama server
    response = ollama.chat(model=mod, messages=[{"role": "user", "content": msg}])
    out = response["message"]["content"]
    # write to the given output file if provided, otherwise print to stdout
    if outfile:
        with open(outfile, "w") as f:
            f.write(out)
    else:
        print(out)
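For example, assuming the ollama server is running locally, the CLI usage described above looks like this (the prompt and output filename are placeholders):

ollama pull gemma3:27b
./ollama_runner.py "explain the Frisch-Waugh-Lovell theorem in two sentences" gemma3:27b fwl_answer.md
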
#!/bin/bash
# Directory to search
DIRECTORY=$1
# Additional extensions to search, if provided
EXTENSIONS="tex"
if [ ! -z "$2" ]; then
  EXTENSIONS="$EXTENSIONS|$2"
fi
# %%
import functools
from typing import Callable, TypeVar, Any
import pandas as pd
import numpy as np
T = TypeVar("T")
# %%
apoorvalal / linRegressionInference.R
Last active June 3, 2024 05:03
Inference for the Population Average Treatment Effect using fully interacted OLS
library(momentfit); library(car); library(tictoc)
set.seed(42)
# %%
dgp = \(n = 500, k = 2) {
  X = matrix(rnorm(n * k), n, k)  # k covariates; only the first two enter the potential outcomes
  Y1 = X[, 1] + X[, 1]^2 + runif(n, -0.5, 0.5)
  Y0 = X[, 2] + X[, 2]^2 + runif(n, -1, 1)
  Z = rbinom(n, 1, 0.6)
  Y = Z * Y1 + (1 - Z) * Y0
  data.frame(Y, Z, X)
}
from joblib import Parallel, delayed
import numpy as np
import pandas as pd
class LinearMediation:
    def __init__(self):
        pass

    def fit(self, X, W, y, store=True):
apoorvalal / texJanus.tex
Last active January 20, 2024 02:25
generate both an article and a slide deck from the same .tex file using beamerswitch.
\documentclass[%
article,
% beamer,
beameroptions={ignorenonframetext,14pt},
articleoptions={a4paper,12pt},
also={trans,handout,article}
]{beamerswitch}
\handoutlayout{nup=3plus,border=1pt}
\articlelayout{maketitle,frametitles=none}
\mode<article>{\usepackage[hmargin=2cm,vmargin=2cm]{geometry}}
apoorvalal / ml_powered_covariate_adjustment.py
Created December 20, 2023 01:45
covariate adjustment using nonparametric regression (Wager et al 2016 PNAS)
import numpy as np
import pandas as pd
from scipy.stats import norm
from sklearn.model_selection import cross_val_predict, KFold
# learners
from xgboost import XGBRegressor
from glum import GeneralizedLinearRegressorCV
from sklearn.kernel_ridge import KernelRidge
apoorvalal / fwl_estimates_and_se.R
Created December 8, 2023 20:16
numerical verification that the coefficient and SE from the FWL (Frisch-Waugh-Lovell) partialled-out regression match those from the full regression
library(estimatr); library(magrittr)  # magrittr for the %>% pipe
data(auto)
# %% FWL regression coefficient
auto$ytil = lm(price ~ displacement, auto)$resid
auto$x2til = lm(weight ~ displacement, auto)$resid
(fwlest = lm_robust(ytil ~ x2til, auto, se_type = "HC0") %>%
  summary %>% .$coefficients %>% .[2, 1:2])
# %%
(fullest =
  lm_robust(price ~ weight + displacement, auto, se_type = "HC0") %>%
  summary %>% .$coefficients %>% .[2, 1:2])
apoorvalal / ols_lean.py
Created November 5, 2023 17:21
lean implementation of OLS
import numpy as np
from scipy.linalg import lstsq
np.random.seed(42)
# %%
def ols(X, y, vcov = 'HC1', driver = 'gelsy'):
    """
    Fast, minimal implementation of least squares regression with robust SEs
    Args:
        X: n X p array of covariates