
@opparco
opparco / debug-tokenizer-weblab-10b.py
Created August 18, 2023 11:20
debug tokenizer of matsuo-lab/weblab-10b
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("matsuo-lab/weblab-10b")
dot = tokenizer.encode(".")
print(dot)
# [15]
nemureru = tokenizer.encode("眠れる")
print(nemureru)
#
filename = '../grammars/japanese.gbnf'
print(f"overwrite {filename}")
with open(filename, 'w', encoding='utf-8') as f:
    f.write("""#
root ::= char+ ([ \\t\\n] char+)*
char ::= [\u3000-\u303F\u3040-\u309F\u30A0-\u30FF\u4E00-\u9FFF\uFF21-\uFF3A\uFF41-\uFF5A\uFF10-\uFF19]""")
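The `char` rule's Unicode ranges can be sanity-checked outside llama.cpp; a minimal sketch using Python's `re` module, with the range list copied from the grammar above (the helper name is illustrative):

```python
import re

# the same Unicode ranges as the char rule in japanese.gbnf
CHAR = re.compile(
    "[\u3000-\u303F\u3040-\u309F\u30A0-\u30FF"
    "\u4E00-\u9FFF\uFF21-\uFF3A\uFF41-\uFF5A\uFF10-\uFF19]"
)

def matches_grammar_chars(text):
    """True if every character falls inside the grammar's char class."""
    return all(CHAR.fullmatch(c) for c in text)

print(matches_grammar_chars("眠れる"))   # True: kanji + hiragana
print(matches_grammar_chars("sleep"))   # False: ASCII letters are excluded
```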
@opparco
opparco / output.txt
Created July 23, 2023 11:58
llama.cpp demo
C:\pub\llama.cpp\build>.\bin\Release\main.exe -m C:\pub\llama\llama-2-7b-chat\ggml-model-q4_K_M.bin -c 2048 -p "translate in japanese: The pen is mightier than the sword."
main: build = 852 (294f424)
main: seed = 1690113032
llama.cpp: loading model from C:\pub\llama\llama-2-7b-chat\ggml-model-q4_K_M.bin
llama_model_load_internal: format = ggjt v3 (latest)
llama_model_load_internal: n_vocab = 32000
llama_model_load_internal: n_ctx = 2048
llama_model_load_internal: n_embd = 4096
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 32
template<typename Key, typename T, T value = T()>
class defaultable_map :
    public std::unordered_map<Key, T>
{
public:
    // inherit std::unordered_map constructors
    using std::unordered_map<Key, T>::unordered_map;

    T & operator[](const Key & key)
    {
        // insert the default `value` if the key is absent, then return a reference
        return this->emplace(key, value).first->second;
    }
};

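The class above parallels Python's `collections.defaultdict`, except the default is a compile-time template argument rather than a runtime factory; a rough analogue:

```python
from collections import defaultdict

# rough analogue of defaultable_map<std::string, int, -1>:
# indexing a missing key inserts the default and returns it
counts = defaultdict(lambda: -1)
print(counts["missing"])       # -1
print("missing" in counts)     # True: the lookup inserted the key
```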
Here are a few Yuri couples from Madoka Magica, formatted using Markdown:

  1. Madoka Kaname and Kyubey
    • Madoka's determination to become a Magical Girl and Kyubey's manipulation of her desires create a complex and often fraught relationship.
  2. Homura Akemi and Madoka Kaname
    • Homura's fierce protectiveness of Madoka and her willingness to make sacrifices for her rival's happiness create a powerful and emotional bond.
  3. Kyoko Sakura and Sayaka Miki
    • Kyoko's quiet strength and Sayaka's desire to save her friend create a touching and heartwarming relationship.
  4. Mami Tomoe and Kyoko Sakura
    • Mami's mentorship of Kyoko and Kyoko's admiration for Mami's strength and leadership create a beautiful and supportive relationship.
@opparco
opparco / xyz_grid-replace-lora-args.patch
Last active April 25, 2023 00:45
Prompt S/R for LoRA and LyCORIS arguments
diff --git a/scripts/xyz_grid.py b/scripts/xyz_grid.py
index 3895a795..a5765c3a 100644
--- a/scripts/xyz_grid.py
+++ b/scripts/xyz_grid.py
@@ -36,11 +36,24 @@ def apply_field(field):
def apply_prompt(p, x, xs):
- if xs[0] not in p.prompt and xs[0] not in p.negative_prompt:
- raise RuntimeError(f"Prompt S/R did not find {xs[0]} in prompt or negative prompt.")
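The patch's goal, replacing the weight inside webui's `<lora:...>` / `<lyco:...>` tags during Prompt S/R, can be sketched with a regex; the tag syntax is the webui's, but this helper and its name are illustrative, not the patch itself:

```python
import re

def replace_extra_network_arg(prompt, name, new_weight):
    """Swap the weight argument of a <lora:name:w> or <lyco:name:w> tag."""
    pattern = re.compile(rf"<(lora|lyco):{re.escape(name)}:[0-9.]+>")
    return pattern.sub(rf"<\g<1>:{name}:{new_weight}>", prompt)

print(replace_extra_network_arg("1girl <lora:style:0.8>", "style", 0.5))
# 1girl <lora:style:0.5>
```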
@opparco
opparco / client-sample.py
Created November 15, 2022 05:22
gradio api txt2img client sample
import requests
def main():
    prompt = "((masterpiece)), ((best quality)), ((illustration)), nsfw, beautiful thicc cute victorian girl, loli face, Feminine, Anders Zorn, ((cleavage of huge breasts)), puffy nipple, :d, blouse, long skirt"
    negative_prompt = "[[octane]], [[nsfw]], low hand,low arms,hand,arms,fingers,legs, lower body,thighs, cropped, worst quality, low quality,quality"
    sampling_methods = [
        "Euler a",
        "Euler",
diff --git a/modules/ui.py b/modules/ui.py
index 5dce7f3..3a5cc55 100644
--- a/modules/ui.py
+++ b/modules/ui.py
@@ -775,7 +775,7 @@ def create_ui(wrap_gradio_gpu_call):
show_progress=False,
)
- txt2img_prompt.submit(**txt2img_args)
+ txt2img_prompt.submit(api_name='txt2img', **txt2img_args)
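With `api_name='txt2img'` registered, gradio exposes the action over HTTP at `/api/txt2img`; a hedged client sketch using only the standard library (the payload fields are assumptions about the txt2img signature, not taken from the diff):

```python
import json
import urllib.request

def build_payload(prompt, negative_prompt):
    # gradio's API wraps positional inputs in a "data" list
    return {"data": [prompt, negative_prompt]}

def call_txt2img(base_url, prompt, negative_prompt):
    """POST to the endpoint created by api_name='txt2img' (path assumed)."""
    req = urllib.request.Request(
        f"{base_url}/api/txt2img",
        data=json.dumps(build_payload(prompt, negative_prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```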
@opparco
opparco / mix_model.py
Created September 14, 2022 13:48
Generate a mixed model of the two models.
import argparse
import torch
def main():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--ckpt0",
        type=str,
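Mixing two checkpoints typically means interpolating matching tensors from their `state_dict`s; a minimal sketch of the core step, shown with plain floats in place of torch tensors (the function name and `alpha` parameter are illustrative):

```python
def mix_state_dicts(sd0, sd1, alpha=0.5):
    """Linear interpolation per key: (1 - alpha) * sd0[k] + alpha * sd1[k]."""
    return {k: (1 - alpha) * sd0[k] + alpha * sd1[k] for k in sd0}

print(mix_state_dicts({"w": 0.0}, {"w": 1.0}, alpha=0.25))
# {'w': 0.25}
```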
diff --git a/ldm/modules/attention.py b/ldm/modules/attention.py
index f4eff39..f90c6c4 100644
--- a/ldm/modules/attention.py
+++ b/ldm/modules/attention.py
@@ -174,23 +174,27 @@ class CrossAttention(nn.Module):
context = default(context, x)
k = self.to_k(context)
v = self.to_v(context)
+ del context, x
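The added `del context, x` drops the last references to the input activations before the attention matmuls allocate new tensors, so that memory can be reclaimed earlier. The effect of `del` on reference counts can be seen with plain Python objects (values here stand in for large tensors):

```python
import sys

def make_and_drop():
    big = list(range(1000))
    refs_before = sys.getrefcount(big)
    alias = big                       # another name keeps the object alive
    refs_with_alias = sys.getrefcount(big)
    del alias                         # analogous to `del context, x` in forward()
    refs_after = sys.getrefcount(big)
    return refs_before, refs_with_alias, refs_after

print(make_and_drop())  # alias adds one reference, del removes it
```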