from transformers import BloomForCausalLM, BloomTokenizerFast

# Load the model and tokenizer
model = BloomForCausalLM.from_pretrained("bigscience/bloom")
tokenizer = BloomTokenizerFast.from_pretrained("bigscience/bloom")

# Text generation example
input_text = "Hello, how are you?"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(inputs["input_ids"], max_length=50)

# Decode the result
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
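Note that "bigscience/bloom" is the full 176-billion-parameter checkpoint, which is far too large for most single machines. A minimal sketch for local testing, assuming the smaller bigscience/bloom-560m checkpoint from the same family is acceptable:

from transformers import BloomForCausalLM, BloomTokenizerFast

# Assumption: bigscience/bloom-560m is the small BLOOM variant and fits in a few GB of RAM
model = BloomForCausalLM.from_pretrained("bigscience/bloom-560m")
tokenizer = BloomTokenizerFast.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(inputs["input_ids"], max_length=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))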