Active learning in one line

Keep a model.

Concept | Formal core | Intuition for the next query |
---|---|---|
Decision-centric | “If I were given the label of | |
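The table row above is truncated, but the recipe it names can be sketched with a simpler proxy: pick the pool point whose label the current model is least sure about (margin-based uncertainty sampling, not the full decision-centric criterion). All names here — `predict_proba`, `pool`, `next_query` — are illustrative, not from the original.

```python
import numpy as np

# Margin-based query selection: ask for the label the current model is
# least confident about. Decision-centric criteria would instead score
# how the model's downstream *decisions* change given that label.
def next_query(predict_proba, pool):
    probs = predict_proba(pool)              # shape (n_pool, n_classes)
    top2 = np.sort(probs, axis=1)[:, -2:]    # two largest class probabilities
    margin = top2[:, 1] - top2[:, 0]         # small margin = high uncertainty
    return int(np.argmin(margin))
```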
""" | |
This example demonstrates that fitting a weighted linear mixed-effects model using lmer(weights = …) | |
is algebraically equivalent to pre-multiplying the outcome and all predictors (including intercepts) | |
by the square root of the weights and fitting an unweighted model on the transformed data. | |
The test validates that this "pre-weight and refit" strategy yields identical fitted values and | |
is especially useful when packages like clubSandwich prohibit prior weights in robust variance estimation. | |
Includes realistic longitudinal synthetic data with fixed and random effects, dropout, and continuous covariates. | |
""" | |
from typing import List |
import matplotlib.pyplot as plt
import networkx as nx
from causallearn.utils.KCI.KCI import KCI_CInd, KCI_UInd
from npeet_plus import mi, mi_pvalue
from tqdm import tqdm

from src.causal_discovery.static_causal_discovery import (
    run_causal_discovery,
    visualize_causal_graph,
)
#!/usr/bin/env python3
"""
This Python script, textrepo, concatenates all files within a specified repository into a single text file
while respecting .gitignore patterns and additional specified ignore patterns. It prints the formatted content
to both a specified output file and standard output. This is useful for reviewing all content within a repository
in a structured format, excluding unwanted files and directories such as node_modules, dist, build, and others.
"""
import fnmatch
import os
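The ignore-pattern matching the docstring describes can be sketched with `fnmatch`. This is a simplified model of .gitignore-style globbing, not the script's actual implementation; `is_ignored` is a hypothetical helper.

```python
import fnmatch

def is_ignored(relpath, patterns):
    # A path is skipped if its full relative path or its basename matches
    # any glob pattern. Note: fnmatch's '*' also crosses '/', which real
    # .gitignore semantics do not -- this sketch accepts that looseness.
    basename = relpath.rsplit("/", 1)[-1]
    return any(
        fnmatch.fnmatch(relpath, p) or fnmatch.fnmatch(basename, p)
        for p in patterns
    )

patterns = ["node_modules/*", "dist/*", "*.pyc"]
```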
from concurrent.futures import ProcessPoolExecutor

import numpy as np
from scipy.signal import convolve2d
from tqdm import tqdm

nsims = int(1e6)
nevents = int(1e5)
tailast() {
    # Get the directory from the argument or use the current directory as default
    local dir="${1:-.}"
    # Find the most recently modified file in the directory without descending into subdirectories
    local latest_file=$(find "$dir" -maxdepth 1 -type f -exec stat --format='%Y %n' {} \; | sort -n | tail -1 | awk '{print $2}')
    # Check if a file was found
    if [[ -z "$latest_file" ]]; then
        echo "No files found in $dir"
        return 1
    fi
    # Follow the newest file (the name "tailast" suggests tail-ing the latest file)
    tail -f "$latest_file"
}
import math

import torch
import torch.optim as optim
from torch import nn

class MineWrapper(nn.Module):
    def __init__(self, stat_model, moving_average_rate=0.1, unbiased=False):
        super(MineWrapper, self).__init__()
        self.stat_model = stat_model
        self.moving_average_rate = moving_average_rate
        self.unbiased = unbiased
        # Running average of E[e^T] under the marginals (inferred from the
        # moving_average_rate parameter; used to debias the DV gradient)
        self.register_buffer("ma_et", torch.tensor(1.0))
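For context, the quantity such a wrapper typically estimates is the Donsker-Varadhan lower bound on mutual information (MINE, Belghazi et al. 2018). A standalone sketch, independent of the class above — the assumption here is that `stat_model` maps a batch of (x, y) pairs to one scalar statistic per pair:

```python
import torch

def mine_lower_bound(stat_model, x, y):
    # DV bound on one batch: E_joint[T] - log E_marginal[e^T].
    t_joint = stat_model(x, y).mean()
    # Shuffle y to break the pairing, approximating samples from the
    # product of marginals.
    y_shuffled = y[torch.randperm(y.size(0))]
    n = torch.tensor(float(y.size(0)))
    t_marginal = torch.logsumexp(stat_model(x, y_shuffled), dim=0) - torch.log(n)
    return t_joint - t_marginal
```

Maximizing this bound over the parameters of `stat_model` drives the estimate toward the true mutual information; the `unbiased` / moving-average options in the wrapper address the known bias in the gradient of the log term.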
class DistributedIdentity:
    """
    Singleton class to hold distributed identity information.
    Handles SLURM, torchrun, and local runs.
    Looks for the following environment variables:
    - RANK
    - WORLD_SIZE
    - LOCAL_RANK
    """
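The env-var fallback the docstring implies might look like the sketch below. The defaults and precedence are assumptions for illustration, not the class's actual logic; `read_identity` is a hypothetical name.

```python
import os

def read_identity(env=None):
    # torchrun and SLURM launchers export RANK/WORLD_SIZE/LOCAL_RANK;
    # a plain local run exports none of them, so fall back to a
    # single-process identity (rank 0 of world size 1).
    env = os.environ if env is None else env
    rank = int(env.get("RANK", 0))
    world_size = int(env.get("WORLD_SIZE", 1))
    local_rank = int(env.get("LOCAL_RANK", rank))
    return rank, world_size, local_rank
```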
import torch
import torch.nn as nn

class Sparsemax(nn.Module):
    def __init__(self, dim=-1):
        super(Sparsemax, self).__init__()
        self.dim = dim

    def forward(self, x):
        # Move the dimension to apply Sparsemax to the front
        x = x.transpose(0, self.dim)
        # Support size k(z): largest k with 1 + k * z_(k) > cumsum(z)_(k)
        z_sorted, _ = torch.sort(x, dim=0, descending=True)
        k = torch.arange(1, x.size(0) + 1, dtype=x.dtype, device=x.device).reshape(-1, *([1] * (x.dim() - 1)))
        z_cumsum = z_sorted.cumsum(dim=0)
        support = (1 + k * z_sorted > z_cumsum).to(x.dtype).sum(dim=0, keepdim=True)
        # Threshold tau makes the clipped outputs sum to one
        tau = (z_cumsum.gather(0, support.long() - 1) - 1) / support
        return torch.clamp(x - tau, min=0).transpose(0, self.dim)
setup = '''
import numpy as np
import torch

V_nat = [[1, 2], [3, 4]]
U_nat = [[2, -1, 0, 0, 0, 0],
         [5, 2, 8, -1, 0, 0]]

def compute_using_fleuret_1():
    ...
'''