
Build Sprint 3: Machine Learning Model - Complete Implementation Guide

🎯 What You're Building

You're creating a Machine Learning interface class that predicts monster rarity based on attributes like Level, Health, Energy, and Sanity. This class will train a model, make predictions, save/load the model, and integrate with your API.

Before you start, make sure you have:

  • ✅ Completed Build Sprint 1 and 2
  • ✅ Your local environment set up
  • ✅ Monster data from earlier sprints
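The interface described above can be sketched before you start. This is a hypothetical stdlib-only stand-in: the class name, method names, and the nearest-centroid prediction rule are illustrative assumptions, not the sprint's required API. A real solution would more likely wrap a scikit-learn classifier, but the train/predict/save/load surface is the same.

```python
# Hypothetical sketch of the ML interface class described above.
# Features per monster: [Level, Health, Energy, Sanity].
import pickle
from statistics import mean

class MonsterRarityModel:
    def __init__(self):
        self.centroids = {}  # rarity label -> mean feature vector

    def train(self, rows, labels):
        """rows: list of [level, health, energy, sanity]; labels: rarity strings."""
        by_label = {}
        for row, label in zip(rows, labels):
            by_label.setdefault(label, []).append(row)
        # One centroid (per-feature mean) per rarity class.
        self.centroids = {
            label: [mean(col) for col in zip(*group)]
            for label, group in by_label.items()
        }

    def predict(self, row):
        """Return the rarity whose centroid is closest to `row`."""
        def sq_dist(c):
            return sum((a - b) ** 2 for a, b in zip(row, c))
        return min(self.centroids, key=lambda lbl: sq_dist(self.centroids[lbl]))

    def save(self, path):
        with open(path, "wb") as f:
            pickle.dump(self.centroids, f)

    def load(self, path):
        with open(path, "rb") as f:
            self.centroids = pickle.load(f)
```

Swapping the centroid rule for a scikit-learn model only changes the bodies of `train` and `predict`; the API surface your route handlers call stays identical.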

Here’s the ULTIMATE beginner version — broken into tiny 5–10 line chunks, with W3Schools links for every single concept you might not know yet.

You can literally copy-paste one chunk at a time and see it work step by step.

Phase 1: Create the HTML (index.html)

Chunk 1/10 – Basic page setup

<!DOCTYPE html>

Nice — fun project. I’ll plan this so the Pi Zero measures whether parts inside the washing machine are working (pump, motor, heater, valves, door switch) without asking you to directly wire high-voltage mains into the Pi. Safety first: do not connect Raspberry Pi GPIO directly to mains or to non-isolated circuits. For anything that touches mains, use galvanic isolation (opto-isolators, isolated sensors, or external AC→DC isolated modules), or test components removed from the appliance on a bench with safe low-voltage signals.

Below I give:

  • A high-level plan (hardware + software)
  • BOM (parts & why)
  • Schematics and wiring diagrams (Mermaid block & GPIO maps)
  • Example circuits for safe low-voltage continuity/voltage-detection, LEDs, and an ADC
  • Example Python code for the Pi
  • Safety checklist + testing procedure
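To give a flavor of the low-voltage sensing side, here is a hypothetical helper for interpreting readings from a 10-bit ADC (e.g. an MCP3008) attached to an isolated current sensor on the pump. The sensor scale (100 mV/A, 1.65 V output at zero current) and the "OK" running-current range are assumptions for illustration, not measured values for any particular washer.

```python
# Hypothetical ADC interpretation for an isolated current sensor
# (e.g. an ACS712-style module). All scale constants are assumptions.
VREF = 3.3            # ADC reference voltage (MCP3008 powered at 3.3 V)
ADC_MAX = 1023        # 10-bit full scale
SENSOR_ZERO_V = 1.65  # sensor output with no current flowing (assumed)
SENSOR_V_PER_A = 0.1  # assumed 100 mV per amp

def adc_to_volts(count):
    """Convert a raw 0-1023 ADC reading to volts at the ADC pin."""
    return count * VREF / ADC_MAX

def pump_current_amps(count):
    """Convert a raw ADC reading to sensed pump current in amps."""
    return (adc_to_volts(count) - SENSOR_ZERO_V) / SENSOR_V_PER_A

def pump_ok(count, min_a=0.5, max_a=3.0):
    """True if the measured pump current falls in the expected running range."""
    return min_a <= pump_current_amps(count) <= max_a
```

The same pattern (raw count → volts → physical unit → pass/fail window) applies to the heater and valve checks; only the scale constants and windows change.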

Best Single GPU to Beat the 6x Mac Mini M4 Pro Cluster

To "beat" your 6-node Mac Mini M4 Pro cluster (384GB total unified memory, ~4 tokens/second for 4-bit quantized DeepSeek-V3 671B inference, based on scaling from 8-node benchmarks), we're targeting a single GPU that delivers higher inference speed (e.g., >4 t/s for the full 671B model) while keeping costs reasonable for a local setup. DeepSeek-V3's MoE architecture (only 37B active params/token) helps, but the model's ~386GB 4-bit footprint means no consumer GPU can load it fully in VRAM alone—you'll need CPU offloading (e.g., via llama.cpp or vLLM with 128GB+ system RAM). This hybrid approach is common and works well for interactive use.
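A quick back-of-envelope calculation makes the offloading requirement concrete. The ~386 GB footprint is from the figures above; the 61-layer count and even per-layer size are simplifying assumptions.

```python
# Rough VRAM split for the hybrid CPU/GPU setup described above.
MODEL_GB = 386    # ~4-bit quantized footprint (from the text)
VRAM_GB = 24      # single consumer GPU (e.g. RTX 4090)
NUM_LAYERS = 61   # assumed transformer layer count for DeepSeek-V3

gb_per_layer = MODEL_GB / NUM_LAYERS          # ~6.3 GB per layer
layers_on_gpu = int(VRAM_GB / gb_per_layer)   # layers that fit in VRAM
layers_on_cpu = NUM_LAYERS - layers_on_gpu    # everything else streams from RAM
```

Only a handful of layers fit in 24 GB of VRAM; the rest must stream from system RAM, which is why the 128GB+ RAM figure matters for this hybrid approach.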

The NVIDIA RTX 4090 (24GB VRAM) is the clear winner as the best single-GPU alternative. It outperforms the cluster in raw speed for DeepSeek-V3 (5-15 t/s with optimizations, vs. your 4 t/s), costs far less ($1,600 vs. $12-15K for the cluster), and draws comparable power (~450W peak vs. ~300-400W total for the 6 Minis).

Cloudflare Tunnel → EC2 (Amazon Linux) — Step‑by‑Step

Goal: expose two hostnames via Cloudflare Tunnel that route to services running on a single EC2 instance:

  • <SUBDOMAIN 1>.<YOUR DOMAIN NAME>.com → http://localhost:80 (Python website)
  • <SUBDOMAIN 2>.<YOUR DOMAIN NAME>.com → http://localhost:3000 (Express API)
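Those two routes end up as ingress rules in cloudflared's config file. A sketch of what that config might look like (the tunnel ID and credentials path are placeholders produced by `cloudflared tunnel create`):

```yaml
# Hypothetical /etc/cloudflared/config.yml for the two hostnames above.
tunnel: <TUNNEL-ID>
credentials-file: /etc/cloudflared/<TUNNEL-ID>.json

ingress:
  - hostname: <SUBDOMAIN 1>.<YOUR DOMAIN NAME>.com
    service: http://localhost:80
  - hostname: <SUBDOMAIN 2>.<YOUR DOMAIN NAME>.com
    service: http://localhost:3000
  - service: http_status:404   # required catch-all rule
```

Ingress rules are matched top to bottom, and cloudflared requires a final catch-all rule with no hostname.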

This guide is concise and written for Amazon Linux (YUM/RPM). Commands assume you run them as ec2-user with sudo where necessary.


Method: Application Load Balancer + AWS Certificate Manager (ACM)

Note: ACM will not issue public certificates for Amazon-owned domain names such as *.compute-1.amazonaws.com or an instance's default public DNS name, so this method requires a domain name you control (for example, one managed in Route 53). For a domain you own, ACM issues a free public certificate that all browsers trust.

Step-by-step

  1. Open ACM in the Region where your load balancer runs. A certificate attached to an ALB must be issued in the same Region as the ALB; us-east-1 (N. Virginia) is mandatory only for CloudFront distributions.
    https://us-east-1.console.aws.amazon.com/acm/home

  2. Request a public certificate → click “Request”

Roman to Integer Walkthrough - Step-by-Step Mermaid Visuals


Problem

Convert a Roman numeral string into an integer.
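Before the visuals, note that the whole algorithm fits in a few lines. This sketch uses the standard subtractive-notation rule: add each symbol's value, subtracting instead when a smaller value precedes a larger one (e.g. IV = 4, IX = 9).

```python
# Roman numeral -> integer using the subtractive-pair rule.
VALUES = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(s):
    total = 0
    for i, ch in enumerate(s):
        v = VALUES[ch]
        # Subtract when the next symbol is larger (subtractive pair).
        if i + 1 < len(s) and VALUES[s[i + 1]] > v:
            total -= v
        else:
            total += v
    return total
```

The Mermaid walkthrough below traces exactly this loop, symbol by symbol.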

Roman numerals: