Alex Farach (farach)
@farach
farach / structured-ai-productivity-lit-scan-post2025.json
Last active May 14, 2025 21:48
A structured JSON prompt enhanced with chain-of-thought reasoning and markdown output formatting to retrieve and summarize recent literature on AI’s impact on productivity. The model is guided to think through its process step-by-step and then deliver the summaries in clear, human-readable markdown (headings, bullet points, etc.).
{
"instructions": "You are performing a reproducible, controlled literature scan. Do not speculate. Follow these steps strictly and explain each one before proceeding:\n\n1. **Sources**: Query *only* from these two sources: `https://nber.org` and `https://arxiv.org`. Do not include any other source. Clearly state which source produced each result.\n\n2. **Time Filter**: Only include papers published after **January 1, 2025**. If none are available from a given source, explicitly say so and retrieve the **most recent** paper *after January 1, 2024*, clearly labeling it as an exception.\n\n3. **Topic Filter**: Use this exact query: `\"AI\" AND \"Labor Productivity\"`. Search titles, abstracts, and keywords only. Do not substitute synonyms or related concepts.\n\n4. **Ranking and Selection Criteria**:\n - Prefer empirical or theoretical papers with clearly stated methods over speculative or opinion-based content.\n - Select papers that provide either quantitative findings or methodological contributions di
@farach
farach / replicate_m365_copilot_event_study.R
Created May 6, 2025 13:34
This script reproduces a stylized version of the treatment effects from the paper “Early Impacts of M365 Copilot” using simulated data. It aligns closely with the original methodology—implementing fixed effects for worker, time, and firm-by-month, and applying Newey-West standard errors. The code calibrates intent-to-treat (ITT) effects to match…
library(tidyverse)
library(fixest)
library(ggtext)
set.seed(123)
n_workers <- 6000
n_firms <- 56
rel_months <- -1:6 # -1 = pre-rollout month, 0 = rollout month, 1-6 = months after rollout
# -- 2) Calibrate "true" effects to paper's ITT estimates -------------------
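The description above names the estimation strategy (worker, time, and firm-by-month fixed effects with Newey-West standard errors). A minimal fixest sketch of that call is below; the column names are placeholders for the simulated panel, not the gist's actual variable names.

# Sketch only: event-study ITT regression with worker, month, and firm-by-month
# fixed effects and Newey-West standard errors. Column names are placeholders.
est <- feols(
  outcome ~ i(rel_month, treated, ref = -1) |
    worker_id + month + firm_id^month,
  data = sim_panel,
  vcov = NW(6) ~ worker_id + month
)
iplot(est)  # ITT effect by month relative to rollout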
library(tidyverse)
library(fredr)
library(lubridate)
library(scales)
# Define dynamic date filter (past 5 years)
filter_date <- Sys.Date() - years(5)
# Function to safely fetch FRED data with error handling
safe_fredr <- function(series_id) {
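The preview cuts off at the opening line of safe_fredr(). A minimal sketch of what such a wrapper could look like, assuming it returns NULL on failure so one bad series does not stop the whole pull; the gist's actual behavior may differ.

safe_fredr <- function(series_id) {
  # Assumed error handling: warn and return NULL if the FRED request fails.
  tryCatch(
    fredr(series_id = series_id, observation_start = filter_date),
    error = function(e) {
      message("Failed to fetch ", series_id, ": ", conditionMessage(e))
      NULL
    }
  )
}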
@farach
farach / ai_firm_config_simulation.R
Created January 4, 2025 00:27
This R script uses gganimate to simulate and visualize how firms adapt their configurations (single-layer or two-layer, human or AI) as AI knowledge levels (z_AI) increase. Inspired by the Artificial Intelligence in the Knowledge Economy model, it calculates profits for each setup and dynamically displays the best configurations over time…
library(tidyverse)
library(gganimate)
# Parameters
h <- 0.02 # Time cost per worker for the solver
r <- 2 # Rental rate for one unit of compute
worker_grid <- seq(0.2, 0.8, by = 0.1) # Range of human worker knowledge
solver_grid <- seq(0.3, 1.0, by = 0.1) # Range of human solver knowledge
zAI_values <- seq(0.25, 0.95, by = 0.05) # Range of AI knowledge levels
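The description says the script computes profits for each configuration and animates the best one as z_AI rises. A minimal sketch of that grid-and-animate structure follows; profit() here is a deliberately simplified placeholder, not the profit function of the knowledge-economy model.

# Placeholder profit function: stands in for the model's actual expressions.
profit <- function(config, z_worker, z_solver, zAI) {
  switch(config,
    "human solo"    = z_worker - h,
    "human + human" = z_solver * z_worker - h,
    "human + AI"    = zAI * z_worker - h - r * 0.01,
    "AI solo"       = zAI - r * 0.01
  )
}

configs <- expand_grid(
  config   = c("human solo", "human + human", "human + AI", "AI solo"),
  z_worker = worker_grid,
  z_solver = solver_grid,
  zAI      = zAI_values
) |>
  rowwise() |>
  mutate(pi = profit(config, z_worker, z_solver, zAI)) |>
  ungroup()

best <- configs |>
  group_by(zAI) |>
  slice_max(pi, n = 1, with_ties = FALSE) |>
  ungroup()

anim <- ggplot(best, aes(zAI, pi, label = config)) +
  geom_point(size = 3) +
  geom_text(vjust = -1) +
  transition_states(zAI) +
  shadow_mark()
# animate(anim)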
# This R script generates synthetic data representing job functions in various languages, translates them to English using a local language model, and detects the original language. The output includes the original job function, the translated job function, and the detected language, and is saved to a CSV file for further use.
# Load necessary libraries
library(httr)
library(jsonlite)
library(textcat)
library(tidyverse)
library(glue)
library(here)
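Only the library loads survive in this preview. Below is a minimal sketch of the detect-and-save step the comment describes; translate_local() is a stub standing in for the gist's local-model call, and the sample job functions are illustrative only.

# Stub for the local-model translation call described in the comment above.
translate_local <- function(text) {
  text  # placeholder: the gist presumably sends `text` to a local model server
}

jobs <- tibble(
  job_function_original = c("Ingeniero de datos", "Chef de projet", "Datenanalyst")
)

jobs_out <- jobs |>
  mutate(
    job_function_english = map_chr(job_function_original, translate_local),
    detected_language    = map_chr(job_function_original, textcat)
  )

write_csv(jobs_out, here("job_functions_translated.csv"))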
# Load necessary libraries
library(janitor)
library(httr)
library(jsonlite)
library(tidyverse)
library(furrr)
library(stringr)
library(glue)
# Setup parallel processing
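The preview stops right after the parallel-processing comment. The usual furrr pattern at that point is a plan() call, sketched below with an assumed worker count.

library(future)
plan(multisession, workers = 4)  # worker count is an assumption
# Downstream, furrr::future_map() can then replace purrr::map() for parallel calls.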
# Instructions:
# - Download LM Studio
# - Download Phi-3 Model (within LM Studio)
# - Load the model into LM Studio
# - Start the Local Server (instructions here: https://lmstudio.ai/docs/local-server)
# Load necessary libraries
library(httr)
library(jsonlite)
library(tidyverse)
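A minimal sketch of calling the local server those instructions set up, assuming LM Studio's default OpenAI-compatible endpoint on port 1234 (see the linked docs); the port, payload, and response shape should be checked against the docs rather than taken from the gist.

# Sketch: one chat-completion request against the LM Studio local server.
ask_phi3 <- function(prompt_text) {
  resp <- POST(
    url = "http://localhost:1234/v1/chat/completions",  # assumed default port
    encode = "json",
    body = list(
      messages = list(list(role = "user", content = prompt_text)),
      temperature = 0
    )
  )
  content(resp)$choices[[1]]$message$content
}

ask_phi3("Translate to English: 'Analista de datos'")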
import pandas as pd
import random

def pep_talk():
    pep_csv = pd.read_csv(
        "https://raw.githubusercontent.com/farach/pep/main/pep_talk.csv",
        encoding="unicode_escape"
    )
    # Sample one phrase from each column and join them into a single pep talk,
    # mirroring the R version below.
    pepText = list(
        map(lambda col: random.choice(pep_csv[col].dropna().tolist()), pep_csv.columns)
    )
    return " ".join(pepText)
pep_talk <- function() {
  read.csv("https://raw.githubusercontent.com/farach/pep/main/pep_talk.csv") |>
    purrr::map_chr(~ sample(.x, 1)) |>
    glue::glue_collapse(sep = " ")
}
library(fredr)
library(tidyverse)
library(geofacet)
library(ggrepel)
library(ggtext)
set.seed(42) # For reproducibility
# Prepare state series IDs
state_ids <- c(
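The preview ends partway through the state_ids vector. A minimal sketch of where this likely goes, assuming state-level FRED series (the IDs below are example unemployment-rate series, not necessarily the ones in the gist) and the geofacet workflow the loaded packages suggest.

# Example only: three state unemployment-rate series, fetched and faceted by state.
state_ids <- c(CA = "CAUR", TX = "TXUR", NY = "NYUR")

state_data <- imap_dfr(state_ids, function(id, abbrev) {
  fredr(series_id = id, observation_start = Sys.Date() - 5 * 365) |>
    mutate(state = abbrev)
})

ggplot(state_data, aes(date, value)) +
  geom_line() +
  facet_geo(~ state) +
  labs(x = NULL, y = "Percent")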