Create various settings files. I have one file for each provider, all in ~/.claude:
- KIMI K2.5: kimi_settings.json
```json
{
  "env": {
    "ANTHROPIC_BASE_URL": "https://api.moonshot.ai/anthropic",
```
Create a single-page website for "PHANTOM PROTOCOL" — a fictional upcoming AAA tactical shooter video game set in a near-future cyberpunk world where elite hackers and mercenaries fight for control of a fractured megacity. Requirements:
- Bold, immersive, cinematic design that feels like a AAA game landing page
- A full-screen hero section with a dramatic background image/video and the game logo/title
- A "Story" or "The World" section with atmospheric imagery and a brief narrative hook
- A "Characters" or "Operatives" section showcasing 3-4 playable characters with images and short bios
- A "Features" section highlighting 3 key gameplay elements (e.g., tactical combat, online multiplayer, dynamic environments)
- A "Pre-order" or "Wishlist" call-to-action section
- A footer with platform icons (PC, PlayStation, Xbox), social links, and age rating
Create a production-ready, visually stunning website with a futuristic luxury travel theme.
GOAL
Build a single-page (plus optional “Destination” detail route) website for a fictional brand: “AURORA LUXE TRAVEL” — ultra-premium, concierge-level trips.
TECH STACK (use exactly this unless something breaks)
Here is the full prompt used:
Create a stunning, modern sports-car showcase website as a SINGLE self-contained HTML file (one page) with embedded CSS and JavaScript. Output ONLY the final HTML.
HARD REQUIREMENTS
```
Benchmark started on 2025-10-19 22:52:10
** Command line:
/Users/ifioravanti/github/consumer-tflop-database/.venv/bin/python mamf-finder.py --m_range 0 16384 1024 --n_range 0 16384 1024 --k_range 0 16384 1024 --dtype bfloat16 --output_file=2025-10-19-22:52:09.txt
** Dtype: torch.bfloat16
** Platform/Device info:
- Darwin MacStudioIvan 25.1.0 Darwin Kernel Version 25.1.0: Sun Oct 5 21:09:25 PDT 2025; root:xnu-12377.40.120~10/RELEASE_ARM64_T6031 arm64 arm
```
The command for evaluating on MMLU Pro:
```
mlx_lm.evaluate --model model/repo --task mmlu_pro
```
The script used for the efficiency benchmarks:
| """Run four batched generations with varying sampling settings.""" | |
| import argparse | |
| import mlx.core as mx | |
| from mlx_lm import batch_generate, load | |
| from mlx_lm.sample_utils import make_sampler |
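Only the docstring and imports of that script are shown above. A minimal sketch of how those pieces might fit together follows, assuming batch_generate accepts a list of prompts plus max_tokens and a sampler built with make_sampler; the model repo and prompts are made up for illustration:

```python
# Minimal sketch, not the full benchmark script.
from mlx_lm import batch_generate, load
from mlx_lm.sample_utils import make_sampler

# Hypothetical model repo; any MLX-format model should work here.
model, tokenizer = load("mlx-community/SomeModel-4bit")

prompts = [
    "Explain KV caching in one paragraph.",
    "Write a haiku about Apple Silicon.",
    "List three uses of matrix multiplication.",
    "Summarize what a sampler does during decoding.",
]

# Four batched generations, one per sampling setting: greedy first, then hotter sampling.
for temp in (0.0, 0.3, 0.7, 1.0):
    sampler = make_sampler(temp=temp, top_p=0.95)
    result = batch_generate(model, tokenizer, prompts, max_tokens=256, sampler=sampler)
    # `result` holds the generated texts; see the mlx_lm docs for the exact return type.
    print(f"temp={temp}: finished batch of {len(prompts)} prompts")
```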
```python
#!/usr/bin/env python3
"""
MLX benchmark script that replicates llama-bench behavior exactly.
Uses random tokens for both prompt and generation, no sampling.
"""
import time

import mlx.core as mx
import mlx_lm
from mlx_lm.models.cache import make_prompt_cache
```
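The full script is longer; the core timing loop can be sketched as below, feeding random token IDs through the model the way llama-bench does, with a hypothetical model repo and a placeholder vocabulary size standing in for the real arguments:

```python
# Minimal sketch of the timing loop, not the author's complete script.
model, tokenizer = mlx_lm.load("mlx-community/SomeModel-4bit")  # hypothetical repo
vocab_size = 32000  # placeholder; use the actual tokenizer vocab size

n_prompt, n_gen = 512, 128
cache = make_prompt_cache(model)

# Prompt processing (prefill) over random tokens.
prompt = mx.random.randint(0, vocab_size, (1, n_prompt))
tic = time.perf_counter()
mx.eval(model(prompt, cache=cache))
pp_time = time.perf_counter() - tic

# Token generation: feed one random token at a time, no sampling involved.
tic = time.perf_counter()
for _ in range(n_gen):
    tok = mx.random.randint(0, vocab_size, (1, 1))
    mx.eval(model(tok, cache=cache))
tg_time = time.perf_counter() - tic

print(f"pp: {n_prompt / pp_time:.1f} tok/s  tg: {n_gen / tg_time:.1f} tok/s")
```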
```python
import pygame
import math
import random

# Initialize pygame
pygame.init()

# Screen dimensions
WIDTH, HEIGHT = 800, 800
screen = pygame.display.set_mode((WIDTH, HEIGHT))
```
```html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>P5.js Particle Animation</title>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.4.0/p5.min.js"></script>
    <style>
        body {
            margin: 0;
```