
1. Common Medications

Cold / fever

  • Ibuprofen (reduces fever, relieves pain)
  • Acetaminophen (Tylenol)

Antiviral / flu

  • Oseltamivir (Tamiflu, prescription required)

Antibiotic / anti-inflammatory

  • Amoxicillin (prescription required)
  • Erythromycin ointment (topical)
@liuyunbin
liuyunbin / gist:b6b820ecca264e2768e6574dc4235763
Last active February 26, 2025 13:55
command-use-proxy.md

Description

How to make commands use a proxy server.

Prerequisites

# 1. Proxy types
* http ------ remote DNS resolution -- recommended
* socks  ----
* socks4 ---- local DNS resolution
* socks4a --- remote DNS resolution -- recommended
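
As a quick illustration, most CLI tools (curl, wget, pip, ...) honor the `http_proxy`/`https_proxy` environment variables. A minimal sketch in Python, assuming a proxy listening at `127.0.0.1:1080` (adjust the host and port to your setup):

```python
import os
import subprocess

# Assumed proxy address; replace with your own host/port.
proxy = "http://127.0.0.1:1080"

# Most command-line tools honor these environment variables.
env = {**os.environ, "http_proxy": proxy, "https_proxy": proxy}

# The child process now routes its HTTP(S) traffic through the proxy.
subprocess.run(["curl", "-sI", "https://example.com"], env=env, check=True)
```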
@yoavg
yoavg / multi-llm-agents.md
Last active April 24, 2025 06:15
What makes multi-agent LLM systems multi-agent?

Are multi-LLM-agent systems a thing? Yes they are. But.

Yoav Goldberg, Nov 24, 2024

This piece started with a pair of twitter and bluesky posts:

let's talk about "agents" (in the LLM sense). there's a lot of buzz around "multi-agent" systems where agents collaborate but... i don't really get how it differs from thinking of a single agent with multiple modes of operation. what are the benefits of modeling as multi-agent?

— (((ل()(ل() 'yoav))))👾 (@yoavgo) November 23, 2024
@sayakpaul
sayakpaul / inference.md
Last active February 5, 2025 14:13
(Not so rigorously tested) example showing how to use `bitsandbytes`, `peft`, etc. to LoRA fine-tune Flux.1 Dev.

When loading LoRA params that were obtained on a quantized base model and merging them into the base model, it is recommended to first dequantize the base model, merge the LoRA params into it, and then quantize the model again. Merging directly into a 4-bit quantized model can lead to rounding errors. Below, we provide an end-to-end example:

  1. First, load the original model and merge the LoRA params into it:
from diffusers import FluxPipeline
import torch

ckpt_id = "black-forest-labs/FLUX.1-dev"
pipeline = FluxPipeline.from_pretrained(ckpt_id, torch_dtype=torch.bfloat16)
# Merge the LoRA params into the dequantized base model;
# "your-lora-id" is a placeholder for the actual LoRA checkpoint.
pipeline.load_lora_weights("your-lora-id")
pipeline.fuse_lora()
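
The excerpt stops after step 1. As a hedged sketch of the re-quantization step (the paths and arguments here are assumptions, not the gist's exact code), the merged transformer could be re-quantized to 4-bit NF4 with `bitsandbytes` through diffusers:

```python
from diffusers import FluxTransformer2DModel, BitsAndBytesConfig
import torch

# Placeholder path: assume the merged pipeline from step 1 was saved here.
merged_ckpt = "path/to/merged-flux"

# Re-quantize the merged transformer to 4-bit NF4 with bitsandbytes.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
transformer = FluxTransformer2DModel.from_pretrained(
    merged_ckpt, subfolder="transformer", quantization_config=quant_config
)
```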
@f1shy-dev
f1shy-dev / best_SAE_trick.md
Last active May 3, 2025 05:44
sneakyf1shy's apple intelligence tutorial

the sneakyf1shy apple intelligence tutorial v2.0

Warning

This is patched as of iOS/iPadOS 18.1 DevBeta 5. If you want to follow this, stay on Beta 4.

This actually downloads the models, and is NOT just new SiriUI. Hence, this process is complex and probably not worth it.

⚠️ Prepare to be disappointed and annoyed, and have your time wasted! ⚠️

  • What does not work: Writing Tools, Memories, Reduce Interruptions, Image Eraser, and other tools that are part of official Apple Intelligence on supported devices.
@yoavg
yoavg / instruct-to-not-hallucinate.md
Created September 9, 2024 20:23
Is telling a model to "not hallucinate" absurd?

Can you tell an LLM "don't hallucinate" and expect it to work? My gut reaction was "oh, this is so silly", but upon some reflection, it really isn't. There is actually no reason why it shouldn't work, especially if the model was preference-fine-tuned on instructions containing "don't hallucinate", and if it is a recent commercial model, it likely was.

What does an LLM need in order to follow an instruction? It needs two things:

  1. an ability to perform the task: something in its parameters/mechanism should be indicative of the task objective, in a way that can be influenced. (In our case, it should "know" when it hallucinates, and/or should be able to change or adapt its behavior to reduce the chance of hallucinations.)
  2. an ability to ground the instruction: the model should be able to associate the requested behavior with its parameters/mechanisms. (In our case, the model should associate "don't hallucinate" with the behavior described in 1.)
@ImN1
ImN1 / url_tagger.user.js
Created August 21, 2024 08:24
URL Tagger
// ==UserScript==
// @name URL Tagger
// @namespace http://tampermonkey.net/
// @version 1.5
// @description Tag URLs based on predefined patterns and data
// @author Your Name
// @match *://*/*
// @grant GM_xmlhttpRequest
// @grant GM_addStyle
// @connect 192.168.x.x
@AmericanPresidentJimmyCarter
AmericanPresidentJimmyCarter / flux_lora_cfg.py
Created August 10, 2024 19:30
Use your flux-dev LoRA with a quantized model and CFG in <16 GB VRAM
import inspect
from typing import Any, Callable, Dict, List, Optional, Union
import numpy as np
import torch
from transformers import CLIPTextModel, CLIPTokenizer, T5EncoderModel, T5TokenizerFast
from diffusers.image_processor import VaeImageProcessor
from diffusers.loaders import FluxLoraLoaderMixin
from diffusers.models.autoencoders import AutoencoderKL
@ImN1
ImN1 / grouped_similar_filenames.py
Created June 17, 2024 12:02
grouped similar filenames
def groupedSimilarFilenames(filenames):
    '''
    Group similar filenames.

    The output looks like:

       filenames preffix suffix  size
    0  cover.jpg     NaN    NaN     0
    1    top.png     NaN    NaN     0
    2      9.jpg     NaN   .jpg     0
    3   015a.jpg       0  a.jpg     1
    '''
@sayakpaul
sayakpaul / run_sd3_8bit.py
Last active November 25, 2024 21:50
The code snippet shows how to run Stable Diffusion 3 with an 8-bit T5-xxl, drastically reducing the memory requirements.
from diffusers import StableDiffusion3Pipeline
from transformers import T5EncoderModel
import torch
import time
import gc

def flush():
    # Release Python-side references and clear the CUDA allocator cache.
    gc.collect()
    torch.cuda.empty_cache()
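
The excerpt ends here. As a hedged sketch of the loading step the description refers to (the checkpoint id and arguments are assumptions, not necessarily the gist's exact code), the T5-xxl encoder can be loaded in 8-bit via `bitsandbytes` and passed to the pipeline:

```python
from diffusers import StableDiffusion3Pipeline
from transformers import T5EncoderModel, BitsAndBytesConfig
import torch

model_id = "stabilityai/stable-diffusion-3-medium-diffusers"  # assumed checkpoint id

# Load only the T5-xxl text encoder in 8-bit to shrink its memory footprint.
text_encoder_3 = T5EncoderModel.from_pretrained(
    model_id,
    subfolder="text_encoder_3",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
)

# Reuse the quantized encoder in the pipeline; the rest stays in fp16.
pipe = StableDiffusion3Pipeline.from_pretrained(
    model_id,
    text_encoder_3=text_encoder_3,
    torch_dtype=torch.float16,
    device_map="balanced",
)
image = pipe("a photo of a cat").images[0]
```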