@kwindla
kwindla / bot.py
Created March 11, 2025 19:45
Cartesia Sonic-2 Language Teacher
import asyncio
import os
import sys
from dataclasses import dataclass
import aiohttp
import google.ai.generativelanguage as glm
from dotenv import load_dotenv
from loguru import logger
from runner import configure
@sayakpaul
sayakpaul / grade_images_with_gemini.py
Last active March 19, 2025 15:58
Shows how to use Gemini Flash 2.0 to grade images on multiple aspects like accuracy to prompt, emotional and thematic response, etc.
from google import genai
from google.genai import types
import typing_extensions as typing
from PIL import Image
import requests
import io
import json
import os
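
A minimal sketch (not the gist's full code) of how such grading could look with the google-genai SDK, reusing the imports above; the model name "gemini-2.0-flash", the rubric fields, the example prompt, and the image URL are placeholders/assumptions:

class Grade(typing.TypedDict):
    accuracy_to_prompt: int            # assumed 1-10 rubric score
    emotional_thematic_response: int   # assumed 1-10 rubric score
    explanation: str

client = genai.Client(api_key=os.environ["GOOGLE_API_KEY"])

# Download an image and open it with PIL (placeholder URL).
image_bytes = requests.get("https://example.com/generated.png").content
image = Image.open(io.BytesIO(image_bytes))

response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents=["Grade this image against the prompt: 'a cat on a windowsill'.", image],
    config=types.GenerateContentConfig(
        response_mime_type="application/json",
        response_schema=Grade,
    ),
)
print(json.loads(response.text))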
@k3ntar0
k3ntar0 / index.html
Last active July 18, 2024 04:24
Object detection application generated with Claude 3.5 Sonnet
<!DOCTYPE html>
<html lang="ja">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Realtime Object Detection</title>
<style>
body { font-family: Arial, sans-serif; }
#canvas { border: 1px solid #000; }
table { border-collapse: collapse; margin-top: 20px; }
@dedlim
dedlim / claude_3.5_sonnet_artifacts.xml
Last active April 24, 2025 19:21
Claude 3.5 Sonnet, Full Artifacts System Prompt
<artifacts_info>
The assistant can create and reference artifacts during conversations. Artifacts are for substantial, self-contained content that users might modify or reuse, displayed in a separate UI window for clarity.
# Good artifacts are...
- Substantial content (>15 lines)
- Content that the user is likely to modify, iterate on, or take ownership of
- Self-contained, complex content that can be understood on its own, without context from the conversation
- Content intended for eventual use outside the conversation (e.g., reports, emails, presentations)
- Content likely to be referenced or reused multiple times
@unixzii
unixzii / ForceEnablingXcodeLLM.md
Last active April 19, 2025 18:12
A guide to force enabling Xcode LLM feature on China-SKU Macs.

Introduction

Apple restricts access to the Xcode LLM (predictive code completion) feature on China-SKU Macs. This guide provides a way to bypass that restriction. It has been verified on macOS 15.0 Beta (24A5264n), but there is no guarantee that it will keep working on later macOS versions.

Prerequisites

  • Xcode is installed and has been run at least once.
  • SIP debugging restrictions are disabled (run csrutil enable --without debug from Recovery Mode).

Disclaimer

@alexweberk
alexweberk / mlx_finetuning_gemma.ipynb
Last active April 28, 2025 00:53
MLX Fine-tuning Google Gemma
@ivanfioravanti
ivanfioravanti / gist:bcacc48ef68b02e9b7a4034161824287
Created January 25, 2024 00:01
Fine-tuning dataset generation with the Ollama Python library
import json
import os

import ollama


# Query a local Ollama model and return the stripped completion text.
def query_ollama(prompt, model='openhermes:7b-mistral-v2.5-q6_K', context=''):
    response = ollama.generate(
        model=model,
        prompt=context + prompt)
    return response['response'].strip()
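
A hypothetical usage sketch (not shown in the gist preview) of how query_ollama() might be used to build a small instruction/response dataset; the topics, prompt wording, and output filename are assumptions:

# Placeholder topics; the real gist presumably drives this from its own data.
topics = ["binary search in Python", "list comprehensions"]

with open("dataset.jsonl", "w") as f:
    for topic in topics:
        question = query_ollama(f"Write one short question about {topic}.")
        answer = query_ollama(question)
        f.write(json.dumps({"instruction": question, "output": answer}) + "\n")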
@etrobot
etrobot / oneclicknewsvideo.ipynb
Created November 19, 2023 09:08
OneClickNewsVideo.ipynb
@mberman84
mberman84 / main.py
Created November 10, 2023 16:17
OpenChat Code
import requests
import json
import gradio as gr

url = "http://localhost:11434/api/generate"
headers = {
    'Content-Type': 'application/json',
}
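
A minimal sketch of how the rest of this script might look, assuming the model tag "openchat" and a non-streaming call to Ollama's /api/generate endpoint, wired to a simple Gradio interface:

def generate_response(prompt):
    # POST to the local Ollama server and return the generated text.
    data = {"model": "openchat", "prompt": prompt, "stream": False}
    response = requests.post(url, headers=headers, data=json.dumps(data))
    response.raise_for_status()
    return response.json()["response"]

iface = gr.Interface(fn=generate_response, inputs="text", outputs="text",
                     title="OpenChat via Ollama")
iface.launch()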
@JimLiu
JimLiu / ChatGPT-Translate-Long-Text.js
Last active April 8, 2024 07:26
Use ChatGPT to automatically translate long text, page by page
// WARNING: This script is for learning and demonstration purposes only; do not use it before you understand what it does.
// It splits the input text into pages, extracts one page at a time, edits the second message, sends it, and then collects the results.
// Before use, the conversation needs two existing messages; see the template: https://chat.openai.com/share/17195108-30c2-4c62-8d59-980ca645f111
// Demo video: https://www.bilibili.com/video/BV1tp4y1c7ME/?vd_source=e71f65cbc40a72fce570b20ffcb28b22
//
(function (fullText) {
  const wait = (ms) => new Promise((resolve) => setTimeout(resolve, ms));
  const groupSentences = (fullText, maxCharecters = 2800) => {
    const sentences = fullText.split("\n").filter((line) => line.trim().length > 0);