First, you need to create a conda or venv environment. This mamba.yml file contains the required packages:
name: QAFactEval
channels:
- conda-forge
dependencies:
- python=3.8.18=hd12c33a_0_cpython
- spacy=2.2.4
- spacy-model-en_core_web_sm=2.2.5
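Assuming the file is saved as mamba.yml, the environment can then be created and activated with, for example:

mamba env create -f mamba.yml
conda activate QAFactEval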
from transformers import PreTrainedModel, PretrainedConfig, PreTrainedTokenizer, BatchEncoding
from transformers.modeling_outputs import Seq2SeqLMOutput
import torch


class FakeTransformerConfig(PretrainedConfig):
    model_type = "FakeTransformer"

    def __init__(self, vocab_size=4, **kwargs):
        self.vocab_size = vocab_size  # toy vocabulary of 4 tokens
        super().__init__(pad_token_id=-1, eos_token_id=3, bos_token_id=0, **kwargs)
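For orientation (a usage sketch, not part of the original file), the config behaves like any other Hugging Face PretrainedConfig and exposes the hard-coded special-token ids:

config = FakeTransformerConfig()
print(config.model_type)     # "FakeTransformer"
print(config.bos_token_id)   # 0
print(config.eos_token_id)   # 3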
# pip install decoders
# this demonstration uses a fake toy transformer (https://manueldeprada.com/blog/posts/toy-probabilistic-transformer/)
# to test the correctness of the stochastic beam search implementation
from decoders import inject_supervitamined_decoders, StochasticBeamSearchDecoder, FakeTransformer
from transformers import T5ForConditionalGeneration, T5Tokenizer
import torch


def test_fake_transformer():
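    # The original body is missing above; what follows is an assumed sketch of how
    # the decoders package is typically wired in. That inject_supervitamined_decoders
    # patches the model in place, and that the decoder object is passed to generate()
    # via a generation_strategy keyword, are assumptions not confirmed by this snippet.
    model = FakeTransformer()                  # assumption: default constructor builds the toy model
    inject_supervitamined_decoders(model)      # assumed to register the extra decoding strategies
    decoder = StochasticBeamSearchDecoder()
    input_ids = torch.tensor([[0]])            # bos token of the toy vocabulary
    outputs = model.generate(
        input_ids,
        generation_strategy=decoder,           # assumed keyword for selecting the injected decoder
        num_beams=4,
        num_return_sequences=4,
        max_new_tokens=3,
    )
    # Since the toy model's sequence probabilities are known in closed form, the
    # frequencies of the sampled sequences can be checked against the exact values.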
#!/usr/bin/python3
import re
import signal

import dbus
from dbus.mainloop.glib import DBusGMainLoop  # lets dbus-python use the GLib main loop

import gi
gi.require_version('Gst', '1.0')              # pin the GStreamer API version before importing it
from gi.repository import GLib
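The fragment stops right after the version pin; a typical continuation (a sketch only — the bus choice and the signal handling are assumptions, not the script's actual body) wires these imports together as follows:

from gi.repository import Gst   # safe to import now that the version is pinned

DBusGMainLoop(set_as_default=True)   # attach dbus-python to the GLib main loop
bus = dbus.SessionBus()              # assumption: could equally be dbus.SystemBus()
Gst.init(None)                       # initialise GStreamer before building pipelines

loop = GLib.MainLoop()
signal.signal(signal.SIGINT, lambda *_: loop.quit())  # quit cleanly on Ctrl-C
loop.run()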
We’re seeing repeated CI failures in a fresh container when tests make live HTTP calls. Example from today, 5 or so rerun failures on:
tests/models/pix2struct/test_image_processing_pix2struct.py::Pix2StructImageProcessingTest::test_expected_patches
tests/models/pix2struct/test_image_processing_pix2struct.py::Pix2StructImageProcessingTest::test_call_vqa
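They can be reproduced locally (assuming a transformers checkout with the test dependencies installed) with, for example:

python -m pytest tests/models/pix2struct/test_image_processing_pix2struct.py -k "test_expected_patches or test_call_vqa"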