
@avidale
Created April 30, 2021 21:51
create_rut5-base.ipynb
avidale commented Nov 16, 2023

Now it looks like a problem with incorrect input. But again, without knowing the exact code that led to the error, I cannot say for sure.

@Sandeep0408

Would this gist help: https://gist.github.com/Sandeep0408/236b164cb09408c920aedb15d5c7e984

If not, I can give you access to the Colab notebook via mail. Thanks!


WEXIJUE commented Nov 20, 2023

Hello, I would like to know which version of Python you are using. Also, my model was saved as model.safetensors instead of pytorch_model.bin. Do you have a solution for this? Thank you very much.


Nehc commented Mar 2, 2024

Should this work with XLMRobertaModel, like e5-large? Or is something fundamentally different being used there? It didn't work out for me.

avidale commented Mar 2, 2024

@Nehc

Should this work with XLMRobertaModel, like e5-large? Or is something fundamentally different being used there? It didn't work out for me.

As far as I can judge from the HF documentation, XLMRobertaTokenizer is based on SentencePiece, just like T5Tokenizer. Thus, in principle, the approach should work; I don't see any fundamental reason why it wouldn't.

Nevertheless, the specific details, such as model parameter names, tokenizer parameter names, special tokens etc. may differ between T5 and XLMRoberta, so my code will surely need some adaptation to work with E5.
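(Editorial note, not part of the original reply:) the core vocabulary-pruning step is architecture-agnostic and transfers to an XLM-R-style encoder. A generic sketch with illustrative names; a real adaptation would also have to prune the SentencePiece vocabulary itself and remap special tokens:

```python
# Generic sketch: keep a subset of token ids and slice the embedding
# matrix accordingly. Works for any nn.Embedding-based input layer.
import torch

def prune_embeddings(embedding: torch.nn.Embedding, keep_ids: list[int]) -> torch.nn.Embedding:
    """Build a smaller embedding containing only the rows in keep_ids,
    in the given order (old id keep_ids[i] becomes new id i)."""
    new_emb = torch.nn.Embedding(len(keep_ids), embedding.embedding_dim)
    new_emb.weight.data = embedding.weight.data[keep_ids].clone()
    return new_emb
```

For T5/mT5 the same slicing must also be applied to the tied LM head; XLM-R additionally has its own special-token conventions (e.g. `<s>`, `</s>`) that differ from T5's.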

@MuaazAnsari

Hi @avidale, hope you are doing great! Great work indeed! I have implemented the above code for the Urdu language and want to fine-tune the model for text summarization. I just wanted to ask about the arguments that need to be passed while training (for example, learning rate, weight decay, etc.). Below are the training args for fine-tuning the mt5-base model on an Urdu dataset. What values would you suggest for the reduced model (39.9% of the original model's parameters)? That would be a great help.

from transformers import Seq2SeqTrainingArguments

batch_size = 8
num_train_epochs = 8

# Show the training loss with every epoch
logging_steps = len(tokenized_datasets["train"]) // batch_size
model_name = model_checkpoint.split("/")[-1]

args = Seq2SeqTrainingArguments(
    output_dir=f"{model_name}-finetuned_urdu_mt5-base",
    evaluation_strategy="epoch",
    learning_rate=5.6e-5,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    weight_decay=0.01,
    save_total_limit=3,
    num_train_epochs=num_train_epochs,
    predict_with_generate=True,
    logging_steps=logging_steps,
    push_to_hub=True,
)

I tried using the same args on the reduced model, and it overfit.

avidale commented Nov 25, 2024

Hi @MuaazAnsari!
I have two comments on your question:

  1. I don't expect the optimal hyperparameter values to depend on whether the model vocabulary has been pruned or not.
  2. Unfortunately, they do depend on the dataset that you train on (including the input and output lengths, dataset size, and task difficulty) and your hardware (e.g. if your GPU memory is limited, you'd have to decrease batch size, but then you might want to decrease learning rate or increase gradient accumulation to compensate). Thus, I cannot suggest parameters that would be universally good.
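(Editorial note, not part of the original reply:) one dataset-agnostic remedy for the overfitting mentioned above is early stopping on the validation loss. A minimal, framework-agnostic sketch of the stopping rule; the function name and `patience` default are illustrative:

```python
# Stop training once the validation loss has not improved on its
# previous best for `patience` consecutive evaluations.
def should_stop(val_losses: list[float], patience: int = 2) -> bool:
    """Return True if the last `patience` eval losses all failed to
    improve on the best loss seen before them."""
    if len(val_losses) <= patience:
        return False
    best_before = min(val_losses[:-patience])
    return all(loss >= best_before for loss in val_losses[-patience:])
```

With the transformers Trainer, the same effect can be obtained via `EarlyStoppingCallback` together with `load_best_model_at_end=True`.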
