
@opparco
opparco / parser.y
Created December 2, 2021 06:13
postposition option
class BCDice::Command::Parser
token NUMBER R U C F PLUS MINUS ASTERISK SLASH PARENL PARENR AT SHARP DOLLAR CMP_OP QUESTION NOTATION
expect 2
rule
  expr: notation option modifier target option
        {
          raise ParseError unless @modifier
          notation, option, modifier, target, post_option = val
opparco / prompt2tokens.py
Created September 2, 2022 15:54
Get the number of tokens using the same tokenizer that Stable Diffusion uses.
"""
Get the number of tokens using the same tokenizer that Stable Diffusion uses.
author: opparco
"""
import argparse
from transformers import CLIPTokenizer
def main():
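The script is truncated here; a minimal sketch of the idea it describes, assuming the tokenizer id "openai/clip-vit-large-patch14" (the CLIP tokenizer Stable Diffusion v1 ships with) and a hypothetical count_tokens helper not present in the gist:

```python
def count_tokens(prompt, tokenizer=None):
    """Count prompt tokens; by default uses the CLIP tokenizer
    that Stable Diffusion v1 uses ("openai/clip-vit-large-patch14")."""
    if tokenizer is None:
        # lazy import so the helper also works with a stand-in tokenizer
        from transformers import CLIPTokenizer
        tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
    # add_special_tokens=False excludes the begin/end-of-text markers,
    # so the count reflects only the prompt's own tokens
    ids = tokenizer(prompt, add_special_tokens=False)["input_ids"]
    return len(ids)
```

In the original script the prompt presumably comes from argparse rather than being passed in directly.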
diff --git a/ldm/modules/attention.py b/ldm/modules/attention.py
index f4eff39..f90c6c4 100644
--- a/ldm/modules/attention.py
+++ b/ldm/modules/attention.py
@@ -174,23 +174,27 @@ class CrossAttention(nn.Module):
context = default(context, x)
k = self.to_k(context)
v = self.to_v(context)
+ del context, x
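The point of the patch is to drop references to large activations as soon as they are consumed, so the allocator can reuse that memory before the attention computation peaks. A minimal sketch of the pattern (kv_from_context and the callables are illustrative, not part of the gist):

```python
def kv_from_context(x, to_k, to_v, context=None):
    # mirrors the opening of CrossAttention.forward:
    # context defaults to x (self-attention)
    context = x if context is None else context
    k = to_k(context)
    v = to_v(context)
    # once k and v exist, x and context are no longer needed; deleting the
    # references lets the (CUDA) allocator reclaim their memory before the
    # large softmax(QK^T)V product is formed
    del context, x
    return k, v
```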
opparco / mix_model.py
Created September 14, 2022 13:48
Generate a mixed model from two models.
import argparse
import torch
def main():
parser = argparse.ArgumentParser()
parser.add_argument(
"--ckpt0",
type=str,
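The script is cut off here; the core of checkpoint mixing is presumably a per-key linear interpolation of the two state dicts. A sketch under that assumption (mix_state_dicts is a hypothetical name; in the real script the values would be torch tensors loaded with torch.load):

```python
def mix_state_dicts(sd0, sd1, alpha=0.5):
    """Blend matching weights: (1 - alpha) * sd0[k] + alpha * sd1[k].
    Works on torch state dicts or any mapping of numeric values;
    keys present in only one checkpoint are skipped."""
    return {k: (1 - alpha) * sd0[k] + alpha * sd1[k] for k in sd0 if k in sd1}
```

With torch checkpoints this would be called on the loaded dicts, e.g. `mix_state_dicts(torch.load(ckpt0)["state_dict"], torch.load(ckpt1)["state_dict"], 0.3)`.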
diff --git a/modules/ui.py b/modules/ui.py
index 5dce7f3..3a5cc55 100644
--- a/modules/ui.py
+++ b/modules/ui.py
@@ -775,7 +775,7 @@ def create_ui(wrap_gradio_gpu_call):
show_progress=False,
)
- txt2img_prompt.submit(**txt2img_args)
+ txt2img_prompt.submit(api_name='txt2img', **txt2img_args)
opparco / client-sample.py
Created November 15, 2022 05:22
gradio api txt2img client sample
import requests
def main():
prompt = "((masterpiece)), ((best quality)), ((illustration)), nsfw, beautiful thicc cute victorian girl, loli face, Feminine, Anders Zorn, ((cleavage of huge breasts)), puffy nipple, :d, blouse, long skirt"
negative_prompt = "[[octane]], [[nsfw]], low hand,low arms,hand,arms,fingers,legs, lower body,thighs, cropped, worst quality, low quality,quality"
sampling_methods = [
"Euler a",
"Euler",
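The client sample is truncated; continuing the idea as a sketch, assuming a local web UI patched as above and that Gradio serves events registered with `api_name` at POST `/run/<api_name>` (the exact order of inputs in `data` depends on how the UI wired its components, so only the first two slots are shown):

```python
def build_payload(prompt, negative_prompt=""):
    # "data" carries the positional inputs of the txt2img event
    return {"data": [prompt, negative_prompt]}

def txt2img(base_url, prompt, negative_prompt=""):
    import requests  # local import keeps build_payload dependency-free
    r = requests.post(f"{base_url}/run/txt2img",
                      json=build_payload(prompt, negative_prompt),
                      timeout=600)
    r.raise_for_status()
    return r.json()["data"]
```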
opparco / xyz_grid-replace-lora-args.patch
Last active April 25, 2023 00:45
Prompt S/R for LoRA and LyCORIS arguments
diff --git a/scripts/xyz_grid.py b/scripts/xyz_grid.py
index 3895a795..a5765c3a 100644
--- a/scripts/xyz_grid.py
+++ b/scripts/xyz_grid.py
@@ -36,11 +36,24 @@ def apply_field(field):
def apply_prompt(p, x, xs):
- if xs[0] not in p.prompt and xs[0] not in p.negative_prompt:
- raise RuntimeError(f"Prompt S/R did not find {xs[0]} in prompt or negative prompt.")
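The patch itself is cut off, but per its description the goal is to let Prompt S/R rewrite arguments inside `<lora:...>` and `<lyco:...>` tags as well as plain prompt text. A regex sketch of that kind of replacement (replace_lora_weight is a hypothetical helper, not the patch's actual code):

```python
import re

def replace_lora_weight(prompt, name, weight):
    # rewrite the weight argument of a matching <lora:name:weight> tag,
    # leaving the rest of the prompt untouched
    pattern = rf"<lora:{re.escape(name)}:[^>]+>"
    return re.sub(pattern, f"<lora:{name}:{weight}>", prompt)
```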

Here are a few Yuri couples from Madoka Magica, formatted using Markdown:

  1. Madoka Kaname and Kyubey
    • Madoka's determination to become a Magical Girl and Kyubey's manipulation of her desires create a complex and often fraught relationship.
  2. Homura Akemi and Madoka Kaname
    • Homura's fierce protectiveness of Madoka and her willingness to make sacrifices for her rival's happiness create a powerful and emotional bond.
  3. Kyoko Sakura and Sayaka Nakamura
    • Kyoko's quiet strength and Sayaka's desire to save her friend create a touching and heartwarming relationship.
  4. Mami Tomoe and Kyoko Sakura
    • Mami's mentorship of Kyoko and Kyoko's admiration for Mami's strength and leadership create a beautiful and supportive relationship.
template<typename Key, typename T, T value = T()>
class defaultable_map :
    public std::unordered_map<Key, T>
{
public:
    // inherit std::unordered_map constructors
    using std::unordered_map<Key, T>::unordered_map;

    T & operator[](const Key & key)
    {
        // insert the template default if the key is absent,
        // then return a reference to the stored value
        return this->try_emplace(key, value).first->second;
    }
};
opparco / output.txt
Created July 23, 2023 11:58
llama.cpp demo
C:\pub\llama.cpp\build>.\bin\Release\main.exe -m C:\pub\llama\llama-2-7b-chat\ggml-model-q4_K_M.bin -c 2048 -p "translate in japanese: The pen is mightier than the sword."
main: build = 852 (294f424)
main: seed = 1690113032
llama.cpp: loading model from C:\pub\llama\llama-2-7b-chat\ggml-model-q4_K_M.bin
llama_model_load_internal: format = ggjt v3 (latest)
llama_model_load_internal: n_vocab = 32000
llama_model_load_internal: n_ctx = 2048
llama_model_load_internal: n_embd = 4096
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 32