Arthur M. Collé (arthurcolle)
@arthurcolle
arthurcolle / claudius.py
Created June 28, 2025 01:10
claudius 2024
from anthropic import Anthropic  # client for the Anthropic messages API
import readline  # enables line editing and history in the interactive prompt
from datetime import datetime as dt

# Global tracker for the current simulation depth.
current_sim_depth = 0

def set_sim_depth(depth):
    global current_sim_depth
    current_sim_depth = depth
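
A minimal sketch of how this scaffolding might be used, assuming the standard Anthropic messages API; the model name, system prompt, and run_turn helper below are illustrative and not part of the gist:

# Hypothetical usage of the snippet above; model name and prompt are assumptions.
client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def run_turn(user_input):
    set_sim_depth(current_sim_depth + 1)
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        system=f"[{dt.now().isoformat()}] simulation depth: {current_sim_depth}",
        messages=[{"role": "user", "content": user_input}],
    )
    return response.content[0].text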

Who Was Paracelsus?

Full Name: Philippus Aureolus Theophrastus Bombastus von Hohenheim
Lifespan: c. 1493 – 1541
Known As: Paracelsus
Professions: Physician, alchemist, astrologer, lay theologian, philosopher
Era: German Renaissance

Early Life and Education

MLX Erlang: A Fault-Tolerant Distributed Machine Learning Framework for Apple Silicon Clusters

Arthur Colle, International Distributed Systems Corporation (IDSC)

Prologue: The Great Convergence - When Worlds Collide

Stanford University, 2:47 AM, December 12th, 2024

Dr. Sarah Chen's MacBook Pro didn't just crash—it surrendered. The M2 Max chip, pushed beyond all reasonable limits, had been training her revolutionary protein folding model for 147 hours straight. The blue screen of thermal death flickered once, then darkness. Six days of computation, 2.4 billion gradient updates, the equivalent of $200,000 in cloud compute credits—all lost to the ether of unreliable consumer hardware masquerading as scientific infrastructure.

@arthurcolle
arthurcolle / The $200 Billion Infrastructure Crisis That's About to Get Much Worse.md
Created May 31, 2025 00:13
# MLX Erlang: A Fault-Tolerant Distributed Machine Learning Framework for Apple Silicon Clusters

The $200 Billion Infrastructure Crisis That's About to Get Much Worse

Executive Summary: The Great AI Awakening (And Why It's Financially Unsustainable)

December 15th, 2024 - San Francisco

The AI revolution isn't failing because the models aren't smart enough. It's failing because the infrastructure is financially unsustainable, operationally fragile, and architecturally doomed.

@arthurcolle
arthurcolle / privacy policy
Created April 29, 2025 02:21
privacy policy
The privacy policy is that all data is private.
@arthurcolle
arthurcolle / main.py
Created April 22, 2025 20:20
GIC backend main.py
import asyncio
import os

import httpx
import redis
from fastapi import FastAPI, Request
from pydantic import BaseModel, BaseConfig

# Base URL of the upstream service (FAKEMAIL) and the public URL advertised for webhook callbacks.
API = os.getenv("FAKEMAIL", "https://1fef-216-158-152-64.ngrok-free.app")
WEBHOOK_URL = os.getenv("WEBHOOK_PUBLIC_URL", "https://260d-2600-4040-4930-d500-ad7a-1287-fa48-5d5d.ngrok-free.app")
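
A minimal sketch of how this backend plausibly continues, assuming a FastAPI app that receives webhook deliveries from the upstream service; the endpoint path, payload model, and /subscribe registration call are assumptions, not the gist's actual code:

app = FastAPI()

class InboundEmail(BaseModel):
    # Hypothetical payload shape; the real schema would be defined later in the gist.
    sender: str
    subject: str
    body: str

@app.post("/webhook")
async def receive_email(email: InboundEmail):
    # Handle an inbound message pushed by the upstream service.
    return {"status": "received", "subject": email.subject}

@app.on_event("startup")
async def register_webhook():
    # Tell the upstream API where to deliver events (hypothetical endpoint name).
    async with httpx.AsyncClient() as client:
        await client.post(f"{API}/subscribe", json={"url": f"{WEBHOOK_URL}/webhook"})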
@arthurcolle
arthurcolle / lets_build_gpt2.txt
Created February 20, 2025 03:36
let's build gpt2
[00:00:00.000 --> 00:00:04.320] Hi everyone. So today we are going to be continuing our Zero2Hero series
[00:00:04.320 --> 00:00:10.640] and in particular today we are going to reproduce the GPT2 model, the 124 million version of it.
[00:00:10.640 --> 00:00:17.440] So when OpenAI released GPT2, this was 2019 and they released it with this blog post.
[00:00:17.440 --> 00:00:23.040] On top of that they released this paper and on top of that they released this code on GitHub,
[00:00:23.040 --> 00:00:29.600] so OpenAI/GPT2. Now when we talk about reproducing GPT2, we have to be careful because in particular
[00:00:29.600 --> 00:00:34.880] in this video we're going to be reproducing the 124 million parameter model. So the thing to
[00:00:34.880 --> 00:00:41.040] realize is that there's always a miniseries when these releases are made, so there are the GPT2
[00:00:41.040 --> 00:00:46.800] miniseries made up of models at different sizes and usually the biggest model is called the GPT2.
[00:00:46.800
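
For reference (not part of the gist), the released 124M-parameter checkpoint the transcript sets out to reproduce can be loaded with the Hugging Face transformers library:

# Reference only: load the 124M GPT-2 checkpoint discussed in the video.
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")  # "gpt2" is the 124M member of the miniseries
config = model.config
print(config.n_layer, config.n_head, config.n_embd)   # 12 layers, 12 heads, 768-dim embeddings
print(sum(p.numel() for p in model.parameters()))     # roughly 124M parameters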
import openai
import os
import sys
import inspect
import ast
import difflib

# Configure the OpenAI SDK via its legacy (pre-1.0) module-level settings.
openai.api_type = 'openai'
openai.api_key = os.getenv("OPENAI_API_KEY")
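
A minimal sketch of one way these imports could fit together, using the legacy ChatCompletion API implied by the module-level configuration above; the propose_rewrite helper, prompt, and model name are assumptions, not the gist's actual code:

# Hypothetical illustration: ask the model to rewrite a function, validate the
# reply as Python, and show the change as a unified diff.
def propose_rewrite(func):
    source = inspect.getsource(func)                      # original source text
    response = openai.ChatCompletion.create(              # legacy v0.x chat API
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": f"Refactor this function and return only code:\n{source}"}],
    )
    rewritten = response["choices"][0]["message"]["content"]
    ast.parse(rewritten)                                  # raises if the reply is not valid Python
    diff = difflib.unified_diff(source.splitlines(), rewritten.splitlines(),
                                "original", "rewritten", lineterm="")
    return "\n".join(diff)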

meta_dsl_with_oorl.md
Object-Oriented Reinforcement Learning in Mutable Ontologies with Self-Reflective Meta-DSL

Arthur M. Collé

  1. Introduction

1.1. Motivation and Objectives

Reinforcement learning has made significant strides in enabling agents to learn complex behaviors through interaction with their environment. However, traditional approaches often struggle in open-ended, dynamic environments where the optimal behavior and relevant features may change over time.

Keybase proof

I hereby claim:

  • I am arthurcolle on github.
  • I am arthurcolle (https://keybase.io/arthurcolle) on keybase.
  • I have a public key whose fingerprint is 3F3E 4B04 E173 8BDD 299E 0B15 18C9 D0D0 9094 2A26

To claim this, I am signing this object: