| name | replicate |
|---|---|
| description | Search, explore, and run ML models on Replicate (image gen, video, audio, text, etc.) |
| homepage | https://replicate.com |
| metadata | |

Run state-of-the-art open-source and proprietary ML models via the Replicate cloud API.
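As a hedged sketch of what running a model through this skill can look like, here is the official `replicate` Python client in its simplest form. This assumes the client is installed (`pip install replicate`) and `REPLICATE_API_TOKEN` is exported; the model name and prompt are illustrative, not taken from the skill.

```python
# Hedged sketch, not part of the skill itself: run a model on Replicate
# via the official Python client. Requires REPLICATE_API_TOKEN in the
# environment; the model name below is illustrative.
import replicate

output = replicate.run(
    "black-forest-labs/flux-schnell",
    input={"prompt": "an astronaut riding a horse, studio lighting"},
)
print(output)  # typically a list of output file URLs/objects for image models
```

`replicate.run` blocks until the prediction finishes; for long-running video or training jobs the async prediction API is usually a better fit.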
Here's the plan to build the local cog binary on this machine:

# Plan

The prerequisites are mostly in place (mise, Go 1.26, Docker). Here are the steps:

# Step 1: Install mise-managed tools

```bash
mise install
```
| name | openclaw-introspect |
|---|---|
| description | Explore, understand, and reconfigure your own OpenClaw gateway, agent harness, and system prompt. Use when you need to inspect or change OpenClaw configuration (openclaw.json), understand how the system prompt is built, debug session/channel/model issues, navigate the docs or source code, or tune agent defaults (models, thinking, sandbox, tools, heartbeat, compaction, channels, skills, plugins, cron, hooks). Also use for questions about OpenClaw architecture, the agent loop, context window, or how any OpenClaw feature works internally. |
Explore and reconfigure your own harness. This skill gives you structured knowledge about the OpenClaw internals so you can inspect, debug, and tune the running gateway.
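Since the skill centers on inspecting `openclaw.json`, a tiny helper for a first look at the config can be useful. Only the fact that the config is a JSON file comes from the description above; the path and any key names are assumptions — inspect your own file for the real layout.

```python
# Hypothetical helper: list the top-level keys of an openclaw.json so you
# can see which sections exist before digging deeper. The example path and
# key names are assumptions, not documented OpenClaw structure.
import json
from pathlib import Path

def config_keys(path):
    """Return the sorted top-level keys of a JSON config file."""
    return sorted(json.loads(Path(path).read_text()))

# Usage (path is illustrative):
# print(config_keys("openclaw.json"))
```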
```bash
#!/bin/bash
# Refresh package lists
sudo apt-get update
# Install the CUDA toolkit
sudo apt-get install -y nvidia-cuda-toolkit
# Install cog (downloads the prebuilt binary for this OS/arch)
sudo curl -o /usr/local/bin/cog -L "https://github.com/replicate/cog/releases/latest/download/cog_$(uname -s)_$(uname -m)"
sudo chmod +x /usr/local/bin/cog
```
```
palette = 0=#23272e
palette = 1=#f38020
palette = 2=#a8e6a3
palette = 3=#faae40
palette = 4=#4da6ff
palette = 5=#ff80ab
palette = 6=#66d9ef
palette = 7=#c0c5ce
palette = 8=#4f5b66
palette = 9=#f38020
```
Replace the `/etc/docker/daemon.json` Docker config on Brev.dev Crusoe GPU instances (JSON does not allow comments, so the note lives outside the file):

```json
{
  "default-runtime": "nvidia",
  "mtu": 1500,
  "runtimes": {
    "nvidia": {
      "args": [],
      "path": "nvidia-container-runtime"
    }
  }
}
```
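Rather than overwriting the file by hand, the same change can be applied as a merge so any existing daemon settings survive. This is a minimal sketch: it does a shallow merge (an existing `runtimes` key would be replaced wholesale), and the path is parameterized so it can be pointed at a test file before touching `/etc/docker/daemon.json`.

```python
# Minimal sketch: merge the NVIDIA runtime settings into an existing
# Docker daemon.json, preserving unrelated keys. Shallow merge only.
import json
from pathlib import Path

NVIDIA_KEYS = {
    "default-runtime": "nvidia",
    "runtimes": {"nvidia": {"args": [], "path": "nvidia-container-runtime"}},
}

def merge_daemon_json(path):
    """Merge NVIDIA runtime settings into a daemon.json, keeping existing keys."""
    p = Path(path)
    config = json.loads(p.read_text()) if p.exists() else {}
    config.update(NVIDIA_KEYS)
    p.write_text(json.dumps(config, indent=2, sort_keys=True) + "\n")
    return config
```

Run it against the real path with root privileges, then restart the daemon (`sudo systemctl restart docker`) for the change to take effect.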
```bash
#!/bin/bash
# Local CLI command that lets you use Kokoro TTS on a MacBook Pro.
# Requires the Kokoro Docker container to be running first:
#   docker run -p 8880:8880 ghcr.io/remsky/kokoro-fastapi-cpu:latest
# Then save this file to /usr/local/bin.
# Test it with:
#   kokoro "The quick brown fox jumped over the lazy dog"
# Or pipe from a stream:
#   llm "tell me a joke" | kokoro
```
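The comments above describe the wrapper but not its body. Here is a hypothetical sketch of what that body could look like. It assumes the kokoro-fastapi container exposes an OpenAI-compatible `/v1/audio/speech` endpoint on `localhost:8880` and that `af_bella` is a valid voice — verify both against the image's documentation before relying on them.

```shell
#!/bin/bash
# Hypothetical sketch of the kokoro wrapper body (not shown above).
# Assumes an OpenAI-compatible /v1/audio/speech endpoint on localhost:8880.

# Build the JSON request body, escaping the text via python3.
build_payload() {
  local json_text
  json_text=$(printf '%s' "$1" | python3 -c 'import json,sys; print(json.dumps(sys.stdin.read()))')
  printf '{"model":"kokoro","input":%s,"voice":"af_bella"}' "$json_text"
}

# Send the text to the local server and play the result with macOS afplay.
kokoro_say() {
  local tmp
  tmp=$(mktemp /tmp/kokoro.XXXXXX.mp3)
  curl -s http://localhost:8880/v1/audio/speech \
    -H "Content-Type: application/json" \
    -d "$(build_payload "$1")" \
    -o "$tmp"
  afplay "$tmp"
}

# Entry point (uncomment when installing as /usr/local/bin/kokoro):
# first argument if given, otherwise read stdin.
# kokoro_say "${1:-$(cat)}"
```

Routing the JSON escaping through `python3 json.dumps` keeps quotes and newlines in the input text from breaking the request body.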
```bash
# Setup:
# conda create -n wan python=3.10
# conda activate wan
# pip3 install torch torchvision torchaudio
# pip install git+https://github.com/huggingface/diffusers.git@3ee899fa0c0a443db371848a87582b2e2295852d
# pip install accelerate==1.4.0
# pip install transformers==4.49.0
# pip install ftfy==6.3.1
```
```yaml
services:
  pihole-unbound:
    image: 'bigbeartechworld/big-bear-pihole-unbound:2024.07.0'
    environment:
      - SERVICE_FQDN_PIHOLE_8080
      - SERVICE_FQDN_PIHOLE_10443
      - 'DNS1=127.0.0.1#5353'
      - DNS2=no
      - TZ=America/Chicago
      - WEBPASSWORD=$SERVICE_PASSWORD_PIHOLE
```
```python
from optimum.quanto import freeze, qfloat8, quantize
from diffusers import FluxPipeline
import torch
import time

seed = 1337
generator = torch.Generator("cuda").manual_seed(seed)
pipeline = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
).to("cuda")
```
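The script cuts off before its `quantize`, `freeze`, `qfloat8`, and `time` imports are used. A plausible continuation, following the standard optimum.quanto fp8 pattern — this is an assumption about intent, not the author's confirmed code, and the prompt is illustrative:

```python
# Hypothetical continuation of the truncated script above (assumes the
# variables from that script): quantize the transformer to fp8, then time
# a generation. Standard optimum.quanto pattern, not confirmed source.
quantize(pipeline.transformer, weights=qfloat8)  # quantize transformer weights to fp8
freeze(pipeline.transformer)                     # materialize the quantized weights

start = time.time()
image = pipeline(
    "a photo of an astronaut on the moon",  # illustrative prompt
    num_inference_steps=4,                  # FLUX.1-schnell targets ~4 steps
    generator=generator,
).images[0]
print(f"generated in {time.time() - start:.1f}s")
image.save("flux-schnell.png")
```

Quantizing only the transformer is the usual compromise: it holds most of the parameters, while the text encoders and VAE stay in bf16.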