(eleuther) [[email protected] ~/projects/lm-evaluation-harness (multimodal-prototyping)]$ lm_eval --model hf-multimodal --tasks mmmu --batch_size 8 --model_args pretrained=microsoft/Florence-2-large --trust_remote_code
2024-08-05:14:10:51,211 INFO [__main__.py:272] Verbosity set to INFO
2024-08-05:14:10:51,397 INFO [__init__.py:406] `group` and `group_alias` keys in tasks' configs will no longer be used in the next release of lm-eval. `tag` will be used to allow to call a collection of tasks just like `group`. `group` will be removed in order to not cause confusion with the new ConfigurableGroup which will be the offical way to create groups with addition of group-wide configuations.
2024-08-05:14:10:57,014 INFO [__main__.py:357] Passed `--trust_remote_code`, setting environment variable `HF_DATASETS_TRUST_REMOTE_CODE=true`
2024-08-05:14:10:57,014 INFO [__main__.py:369] Selected Tasks: ['mmmu']
2024-08-05:14:10:57,016 INFO [evaluator.py:158] Setting random seed to 0 | Setting numpy
(eleuther) [[email protected] ~/projects/lm-evaluation-harness (multimodal-prototyping)]$ lm_eval --model hf-multimodal --tasks mmmu --batch_size 8 --model_args pretrained=OpenGVLab/InternVL2-8B --trust_remote_code
2024-08-05:14:08:04,206 INFO [__main__.py:272] Verbosity set to INFO
2024-08-05:14:08:04,392 INFO [__init__.py:406] `group` and `group_alias` keys in tasks' configs will no longer be used in the next release of lm-eval. `tag` will be used to allow to call a collection of tasks just like `group`. `group` will be removed in order to not cause confusion with the new ConfigurableGroup which will be the offical way to create groups with addition of group-wide configuations.
2024-08-05:14:08:09,941 INFO [__main__.py:357] Passed `--trust_remote_code`, setting environment variable `HF_DATASETS_TRUST_REMOTE_CODE=true`
2024-08-05:14:08:09,941 INFO [__main__.py:369] Selected Tasks: ['mmmu']
2024-08-05:14:08:09,943 INFO [evaluator.py:158] Setting random seed to 0 | Setting numpy seed
(eleuther) [[email protected] ~/projects/lm-evaluation-harness (multimodal-prototyping)]$ lm_eval --model hf-multimodal --tasks mmmu --batch_size 8 --model_args pretrained=vikhyatk/moondream2 --trust_remote_code
2024-08-05:13:54:15,773 INFO [__main__.py:272] Verbosity set to INFO
2024-08-05:13:54:15,965 INFO [__init__.py:406] `group` and `group_alias` keys in tasks' configs will no longer be used in the next release of lm-eval. `tag` will be used to allow to call a collection of tasks just like `group`. `group` will be removed in order to not cause confusion with the new ConfigurableGroup which will be the offical way to create groups with addition of group-wide configuations.
2024-08-05:13:54:21,582 INFO [__main__.py:357] Passed `--trust_remote_code`, setting environment variable `HF_DATASETS_TRUST_REMOTE_CODE=true`
2024-08-05:13:54:21,582 INFO [__main__.py:369] Selected Tasks: ['mmmu']
2024-08-05:13:54:21,583 INFO [evaluator.py:158] Setting random seed to 0 | Setting numpy seed to
(eleuther) [[email protected] ~/projects/lm-evaluation-harness (multimodal-prototyping)]$ lm_eval --model hf-multimodal --tasks mmmu --batch_size 8 --model_args pretrained=facebook/chameleon-7b
2024-08-05:12:52:28,270 INFO [__main__.py:272] Verbosity set to INFO
2024-08-05:12:52:28,451 INFO [__init__.py:406] `group` and `group_alias` keys in tasks' configs will no longer be used in the next release of lm-eval. `tag` will be used to allow to call a collection of tasks just like `group`. `group` will be removed in order to not cause confusion with the new ConfigurableGroup which will be the offical way to create groups with addition of group-wide configuations.
2024-08-05:12:52:33,895 INFO [__main__.py:369] Selected Tasks: ['mmmu']
2024-08-05:12:52:33,897 INFO [evaluator.py:158] Setting random seed to 0 | Setting numpy seed to 1234 | Setting torch manual seed to 1234
2024-08-05:12:52:33,897 INFO [evaluator.py:195] Initializing hf-multimodal model, with arguments: {'pretrained': 'faceb
(eleuther) [[email protected] ~/projects/lm-evaluation-harness (multimodal-prototyping)]$ lm_eval --model hf-multimodal --tasks mmmu --batch_size 1 --model_args pretrained=google/paligemma-3b-pt-224
2024-08-05:12:30:21,563 INFO [__main__.py:272] Verbosity set to INFO
2024-08-05:12:30:21,743 INFO [__init__.py:406] `group` and `group_alias` keys in tasks' configs will no longer be used in the next release of lm-eval. `tag` will be used to allow to call a collection of tasks just like `group`. `group` will be removed in order to not cause confusion with the new ConfigurableGroup which will be the offical way to create groups with addition of group-wide configuations.
2024-08-05:12:30:27,238 INFO [__main__.py:369] Selected Tasks: ['mmmu']
2024-08-05:12:30:27,240 INFO [evaluator.py:158] Setting random seed to 0 | Setting numpy seed to 1234 | Setting torch manual seed to 1234
2024-08-05:12:30:27,240 INFO [evaluator.py:195] Initializing hf-multimodal model, with arguments: {'pretrained': '
from typing import Tuple

import math

import torch
from torchtune.utils.seed import set_seed

# Seed everything the same way we will in the actual test
set_seed(0)

def apply_scaling(freqs: torch.Tensor):
import time
from functools import partial

##########
from typing import Any, Dict, List, Optional

import psutil
import torch
from torch.nn import functional as F
from torchtune.models.llama3 import llama3_tokenizer
from torchtune.datasets import instruct_dataset

tokenizer = llama3_tokenizer("./model/original/tokenizer.model")
dataset = instruct_dataset(
    tokenizer=tokenizer,
    source="TIGER-Lab/WebInstructSub",
    template="torchtune.data.AlpacaInstructTemplate",
    column_map={
        "instruction": "question",
model = TransformerDecoder(...)
model.setup_caches(bsz=8, dtype=torch.float)
generate_0(...)

def generate_0(
    model: TransformerDecoder,
    prompt: torch.Tensor,
    ...
) -> torch.Tensor:
EX_MESSAGE = [
    {
        "from": "system",
        "value": "You are an AI assistant. User will you give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps.",
    },
    {
        "from": "human",
        "value": "Definition: In this task, you are given a hateful post in Bengali that expresses hate or encourages violence in a geopolitical context based on the protected characteristics such as race, religion, sex, and sexual orientation. You are expected to classify the post into two classes: geopolitical or non-geopolitical depending on the topic.\nInput: এই রকম একটা মাল রেন্ডিয়ায় আছে নাকি রে? পাকিস্তান কে ঘৃনা করলেও তাদের এসব পারমানবিকের কাছে তুরাই তাল হারাবি\nOutput:",
    },
    {