
@GrahamcOfBorg
Created December 31, 2024 18:02
@Wojciech1985

from transformers import BloomForCausalLM, BloomTokenizerFast

# Load the model and tokenizer
model = BloomForCausalLM.from_pretrained("bigscience/bloom")
tokenizer = BloomTokenizerFast.from_pretrained("bigscience/bloom")

# Example of text generation
input_text = "Hello, how are you?"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(inputs["input_ids"], max_length=50)

# Decode the result
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
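Under the hood, `model.generate` with default settings performs greedy autoregressive decoding: it repeatedly runs the model on the sequence so far, picks the most likely next token, and appends it until `max_length` is reached (or an end-of-sequence token appears). A minimal sketch of that loop, using a hypothetical `toy_next_token` function as a stand-in for the model's argmax prediction:

```python
def toy_next_token(ids):
    # Hypothetical stand-in for a model's most-likely-next-token prediction;
    # a real model would run a forward pass and take argmax over the vocabulary.
    return (sum(ids) * 31 + 7) % 100


def greedy_generate(input_ids, max_length, eos_id=None):
    # Autoregressive loop: extend the sequence one token at a time
    # until max_length is reached or the end-of-sequence token appears.
    ids = list(input_ids)
    while len(ids) < max_length:
        nxt = toy_next_token(ids)
        ids.append(nxt)
        if eos_id is not None and nxt == eos_id:
            break
    return ids


print(greedy_generate([5, 17], max_length=6))  # → [5, 17, 89, 48, 36, 52]
```

This only illustrates the control flow; the real `generate` also supports sampling, beam search, and other strategies via its keyword arguments.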

@Wojciech1985

100000 usd
