Bart is one of the first Seq2Seq models in the library, and achieves state-of-the-art results on text generation tasks such as abstractive summarization. Three sets of pretrained weights are released:
- `bart-large`: the pretrained base model
- `bart-large-cnn`: the base model finetuned on the CNN/Daily Mail abstractive summarization task
- `bart-large-mnli`: the base model finetuned on the MNLI classification task
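As a quick illustration, here is a minimal sketch of loading the summarization checkpoint and generating a summary with the library. Note that on recent library versions the weights are published under the `facebook/` namespace (e.g. `facebook/bart-large-cnn`); the example article text is a placeholder.

```python
from transformers import BartForConditionalGeneration, BartTokenizer

# Load the CNN/Daily Mail finetuned checkpoint (newer versions use the
# "facebook/" prefix; older releases accepted the bare "bart-large-cnn").
tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

# Placeholder input; substitute any article you want summarized.
article = "PG&E scheduled the blackouts in response to forecasts for high winds..."
inputs = tokenizer([article], max_length=1024, truncation=True, return_tensors="pt")

# Generate an abstractive summary with beam search.
summary_ids = model.generate(
    inputs["input_ids"], num_beams=4, max_length=60, early_stopping=True
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```

The same pattern applies to the other checkpoints, e.g. `BartForSequenceClassification` with `bart-large-mnli` for classification.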
Related: