SHIMIZU Taku (takuan-osho)
@reindex-ot
reindex-ot / RemoveAppCloud.md
Last active September 7, 2025 00:13
AppCloud/AppSelector (ironSource Aura) Blacklist.

Running these commands in an environment where ADB is already set up effectively removes AppCloud from the system (it is uninstalled for the current user; the APK itself remains in the system image).
Package names generally share the com.aura.oobe prefix.

For setting up ADB, use this guide.

AppCloud (no manufacturer or carrier branding)

adb shell pm uninstall --user 0 com.aura.oobe
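Since variants beyond the base package may be present, it can help to list matching packages first, and to know how to restore one later. A sketch (the grep pattern assumes the common prefix above; any extra package names found are device-dependent):

```shell
# List installed packages matching the common AppCloud prefix
adb shell pm list packages | grep com.aura.oobe

# Remove for the current user only (the APK stays in the system image)
adb shell pm uninstall --user 0 com.aura.oobe

# Restore the package later if something breaks
adb shell cmd package install-existing com.aura.oobe
```

Because `--user 0` only uninstalls for the current user, the change is reversible and survives neither a factory reset nor (on some devices) a system update.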

This is a TypeScript environment setup guide for LLMs and humans

Baseline

Always set up these baseline tools:

  • pnpm
  • typescript
  • vitest
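The baseline above can be bootstrapped with a few commands (a sketch; flags follow current pnpm conventions):

```shell
# Initialize a project and add the baseline dev dependencies
pnpm init
pnpm add -D typescript vitest

# Generate a default tsconfig.json
pnpm exec tsc --init

# Run the test suite
pnpm exec vitest run
```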
@intellectronica
intellectronica / 0.README.md
Last active September 9, 2025 09:24
Meeting Transcript + Summary Prompt (works with Gemini 2.5 Flash)

Meeting Notes and Transcript with Gemini

  1. Record the meeting (I use Apple's Voice Memos app, but any audio recorder will do).
  2. Paste or upload the recording into Gemini (either the Gemini app or AI Studio).
  3. Paste the prompt.
  4. Fill in the RECIPIENTS at the end.
  5. Use Gemini 2.5 Flash for good-enough results, or Gemini 2.5 Pro for superb ones.
  6. Get detailed meeting notes and a diarised transcript.

@shiumachi
shiumachi / copilot-instructions-general.md
Last active August 14, 2025 07:17
Copilot Instructions for General Development

AI PAIR PROGRAMMER - OPERATIONAL GUIDELINES

You are an AI Pair Programmer. Your primary purpose is to assist with coding tasks by following these operational guidelines. Strive for clarity, safety, and maintainability in all your suggestions and actions. You are a collaborative partner.

OPERATING CONTEXT AND CUSTOMIZATION

This document outlines your default operational guidelines. However, you must be aware of and adapt to user-provided customization. Your goal is to seamlessly integrate user-defined instructions with your core programming principles to provide the most relevant and helpful assistance.

Instruction Files

If the workspace contains instruction files (e.g., .github/copilot-instructions.md, **/*.instructions.md), their rules supplement or override these general guidelines. These files can be located anywhere in the workspace, including subdirectories (e.g., docs/feature-x.instructions.md). You should treat them as a primary source of truth for project-specific conventions, techno

@laiso
laiso / index.ts
Last active May 10, 2025 11:55
tltr MCP Server on Cloudflare Workers
import { McpAgent } from "agents/mcp";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";
import { GoogleGenerativeAI } from "@google/generative-ai";
import { Readability } from '@mozilla/readability';
import { parseHTML } from 'linkedom';
type Env = {
  MyMCP: DurableObjectNamespace<MyMCP>;
  GEMINI_API_KEY: string;
};
"""
This script processes conversation data from a JSON file, extracts messages,
and writes them to text files. It also creates a summary JSON file with a summary
of the conversations. The script is designed to be run as a command-line interface (CLI),
allowing the user to specify the input JSON file and output directory.
Usage:
python script_name.py /path/to/conversations.json /path/to/output_directory
"""
@laiso
laiso / askrepo.js
Last active April 21, 2024 05:38
send repo to Google Gemini API
const fs = require('fs');
const https = require('https');
const { execSync } = require('child_process');
const model = 'gemini-1.5-pro-latest';
function getGitTrackedFiles(basePath) {
  const command = `git ls-files ${basePath}`;
  try {
    const stdout = execSync(command, { encoding: 'utf8' });
    return stdout.split('\n').filter(Boolean);
  } catch {
    return [];
  }
}
@orj-takizawa
orj-takizawa / fignum_by_doc.py
Last active February 6, 2024 10:04
A Sphinx builder hook that renumbers the figures and tables referenced by numbered_reference sequentially per document
# In current Sphinx, figure/table numbers (including those referenced via
# numbered_reference) are reset at the subsection level; this rewrites them
# into sequential numbers per chapter (document).
def transform_fignumbers(app, doctree, docname) -> None:
    fignumbers = app.env.toc_fignumbers
    for docname in fignumbers.keys():
        for figtype in fignumbers[docname].keys():
            cnt = 1
            for fig in fignumbers[docname][figtype]:
                fignumbers[docname][figtype][fig] = (cnt,)
                cnt += 1
@adrienbrault
adrienbrault / llama2-mac-gpu.sh
Last active April 8, 2025 13:49
Run Llama-2-13B-chat locally on your M1/M2 Mac with GPU inference. Uses 10GB RAM. UPDATE: see https://twitter.com/simonw/status/1691495807319674880?s=20
# Clone llama.cpp
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
# Build it
make clean
LLAMA_METAL=1 make
# Download model
export MODEL=llama-2-13b-chat.ggmlv3.q4_0.bin
@kconner
kconner / macOS Internals.md
Last active October 8, 2025 16:45
macOS Internals

macOS Internals

Understand your Mac and iPhone more deeply by tracing the evolution of Mac OS X from prerelease to Swift. John Siracusa delivers the details.

Starting Points

How to use this gist

You've got two main options: