Christopher Erick Moody cemoody

@cemoody
cemoody / signal-health.html
Last active March 27, 2026 18:19
Signal Collector Health Dashboard
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Signal Collector Health</title>
<link href="https://fonts.googleapis.com/css2?family=STIX+Two+Text:ital,wght@0,400;0,500;0,600;0,700;1,400&family=Source+Code+Pro:wght@400;500&display=swap" rel="stylesheet">
<style>
:root {
--green: #3a7d44;
@cemoody
cemoody / vk-tunnel.sh
Last active March 11, 2026 21:13
Vibe Kanban + CopyParty behind Tailscale Serve with origin-stripping proxy
#!/usr/bin/env bash
#
# vk-tunnel.sh — Launch Vibe Kanban + CopyParty via Tailscale Serve
#
# A Node.js proxy handles all routing and origin-stripping:
#   1. /files/* and /.cpr/* → CopyParty
#   2. Everything else      → Vibe Kanban (with Origin header stripped)
#
# Tailscale Serve (:443)
#   └── /* → Node proxy (:42818)
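The routing and origin-stripping rules described in the header can be sketched as two small helpers. This is a minimal Python stand-in for the gist's Node.js proxy, not the gist's actual code; both backend URLs and ports are assumptions for illustration:

```python
COPYPARTY = "http://127.0.0.1:3923"     # assumed CopyParty port
VIBE_KANBAN = "http://127.0.0.1:57000"  # assumed Vibe Kanban port

def route(path: str) -> str:
    """CopyParty serves /files/* and /.cpr/*; everything else goes to Vibe Kanban."""
    if path.startswith("/files/") or path.startswith("/.cpr/"):
        return COPYPARTY
    return VIBE_KANBAN

def strip_origin(headers: dict) -> dict:
    """Drop the Origin header before forwarding, so the upstream's
    same-origin check does not reject requests arriving via the proxy."""
    return {k: v for k, v in headers.items() if k.lower() != "origin"}
```

Stripping Origin (rather than rewriting it) is the simplest way to satisfy a backend that rejects cross-origin requests when it sits behind a TLS-terminating proxy like Tailscale Serve.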
@cemoody
cemoody / test_count_badge.json
Created July 11, 2024 18:36
indexer counts
{}
"""Wrapper around BigQuery call."""
from __future__ import annotations

import logging
import os
from typing import Any, Iterable

import dill
import pandas
from tqdm import tqdm

from google.cloud import bigquery_storage
from google.cloud.bigquery_storage import BigQueryReadClient
from google.cloud.bigquery_storage_v1 import exceptions as bqstorage_exceptions
from google.cloud.bigquery_storage_v1 import types, writer
from google.protobuf import descriptor_pb2
from google.protobuf.descriptor import Descriptor

project_id = (
import os
import io
import json
import math
import time
import random
import numpy as np
import cachetools.func
import sqlite3
from loguru import logger
✓ Initialized. View app at https://modal.com/apps/ap-lHATR9JHJ7S5eGXYHigc75
✓ Created objects.
├── 🔨 Created sample_fn.
├── 🔨 Mounted /Users/chris/code/search/gumbase/jobs/partition.py at /root
├── 🔨 Mounted /Users/chris/code/gumhouse/gumhouse at /root/gumhouse
├── 🔨 Created sample_job.
├── 🔨 Created fit_top_level_kmeans.
├── 🔨 Created scatter_by_centroids_single.
├── 🔨 Created scatter_by_centroids.
├── 🔨 Created gather_single.
apiVersion: v1
kind: Service
metadata:
  name: vespa
  labels:
    app: vespa
spec:
  selector:
    app: vespa
  type: NodePort
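The preview ends before any ports stanza, which a valid Service needs. A hypothetical completion (not part of the gist): 8080 is Vespa's default HTTP port, and nodePort must fall in the cluster's NodePort range, 30000-32767 by default:

```yaml
# Hypothetical ports stanza for the vespa Service above; all port
# numbers are assumptions, not taken from the gist.
  ports:
    - name: http
      port: 8080
      targetPort: 8080
      nodePort: 30080
```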
@cemoody
cemoody / file_data_loader.py
Created December 16, 2022 21:00
A multiprocess Parquet DataLoader for PyTorch. Great for loading large sequential-access datasets. Easy to install, modify, and use.
import multiprocessing
import queue
from loguru import logger
import pandas as pd
def chunks(df, chunk_size=1000):
    """Yield successive chunk_size-row slices of df."""
    for i in range(0, len(df), chunk_size):
        yield df[i : i + chunk_size]
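The chunks helper pairs naturally with a producer process feeding a bounded queue, which is the usual shape of a multiprocess loader. A minimal sketch of that pattern, where the Parquet read is replaced by a synthetic list and every name besides chunks is hypothetical:

```python
import multiprocessing as mp

def chunks(seq, chunk_size=1000):
    # Same slicing logic as the gist's helper; works on lists and DataFrames alike
    for i in range(0, len(seq), chunk_size):
        yield seq[i : i + chunk_size]

def producer(q, n_rows, chunk_size):
    # Stand-in for a worker that reads a Parquet file and enqueues row chunks
    for chunk in chunks(list(range(n_rows)), chunk_size):
        q.put(chunk)
    q.put(None)  # sentinel: no more chunks

def consume(q):
    # Drain the queue until the sentinel arrives
    while (chunk := q.get()) is not None:
        yield chunk

if __name__ == "__main__":
    q = mp.Queue(maxsize=8)  # bounded queue applies backpressure to the reader
    p = mp.Process(target=producer, args=(q, 2500, 1000))
    p.start()
    sizes = [len(c) for c in consume(q)]  # expect chunks of 1000, 1000, 500
    p.join()
```

The bounded queue is the key design choice: it keeps the reader from racing ahead of the training loop and filling memory with decoded chunks.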