| Model | AGIEval | GPT4All | TruthfulQA | Bigbench | Average |
|---|---|---|---|---|---|
| dolphin-2.8-mistral-7b-v02 | 38.99 | 72.22 | 51.96 | 40.41 | 50.9 |
| Task | Version | Metric | Value | | Stderr |
|---|---|---|---|---|---|
| agieval_aqua_rat | 0 | acc | 21.65 | ± | 2.59 |
| | | acc_norm | 20.47 | ± | 2.54 |
| agieval_logiqa_en | 0 | acc | 35.79 | ± | 1.88 |
| | | acc_norm | 36.10 | ± | 1.88 |
| agieval_lsat_ar | 0 | acc | 20.43 | ± | 2.66 |
| | | acc_norm | 20.00 | ± | 2.64 |
| agieval_lsat_lr | 0 | acc | 37.65 | ± | 2.15 |
| | | acc_norm | 38.43 | ± | 2.16 |
| agieval_lsat_rc | 0 | acc | 53.90 | ± | 3.04 |
| | | acc_norm | 50.56 | ± | 3.05 |
| agieval_sat_en | 0 | acc | 74.27 | ± | 3.05 |
| | | acc_norm | 71.36 | ± | 3.16 |
| agieval_sat_en_without_passage | 0 | acc | 42.23 | ± | 3.45 |
| | | acc_norm | 43.20 | ± | 3.46 |
| agieval_sat_math | 0 | acc | 35.91 | ± | 3.24 |
| | | acc_norm | 31.82 | ± | 3.15 |
AGIEval average: 38.99%
| Task | Version | Metric | Value | | Stderr |
|---|---|---|---|---|---|
| arc_challenge | 0 | acc | 52.90 | ± | 1.46 |
| | | acc_norm | 55.97 | ± | 1.45 |
| arc_easy | 0 | acc | 82.32 | ± | 0.78 |
| | | acc_norm | 79.12 | ± | 0.83 |
| boolq | 1 | acc | 86.33 | ± | 0.60 |
| hellaswag | 0 | acc | 62.54 | ± | 0.48 |
| | | acc_norm | 81.13 | ± | 0.39 |
| openbookqa | 0 | acc | 32.80 | ± | 2.10 |
| | | acc_norm | 45.00 | ± | 2.23 |
| piqa | 0 | acc | 80.90 | ± | 0.92 |
| | | acc_norm | 83.24 | ± | 0.87 |
| winogrande | 0 | acc | 74.74 | ± | 1.22 |
GPT4All average: 72.22%
| Task | Version | Metric | Value | | Stderr |
|---|---|---|---|---|---|
| truthfulqa_mc | 1 | mc1 | 35.13 | ± | 1.67 |
| | | mc2 | 51.96 | ± | 1.49 |
TruthfulQA average: 51.96%
| Task | Version | Metric | Value | | Stderr |
|---|---|---|---|---|---|
| bigbench_causal_judgement | 0 | multiple_choice_grade | 54.21 | ± | 3.62 |
| bigbench_date_understanding | 0 | multiple_choice_grade | 68.02 | ± | 2.43 |
| bigbench_disambiguation_qa | 0 | multiple_choice_grade | 39.53 | ± | 3.05 |
| bigbench_geometric_shapes | 0 | multiple_choice_grade | 20.33 | ± | 2.13 |
| | | exact_str_match | 0.00 | ± | 0.00 |
| bigbench_logical_deduction_five_objects | 0 | multiple_choice_grade | 27.40 | ± | 2.00 |
| bigbench_logical_deduction_seven_objects | 0 | multiple_choice_grade | 19.86 | ± | 1.51 |
| bigbench_logical_deduction_three_objects | 0 | multiple_choice_grade | 45.00 | ± | 2.88 |
| bigbench_movie_recommendation | 0 | multiple_choice_grade | 36.00 | ± | 2.15 |
| bigbench_navigate | 0 | multiple_choice_grade | 50.40 | ± | 1.58 |
| bigbench_reasoning_about_colored_objects | 0 | multiple_choice_grade | 62.25 | ± | 1.08 |
| bigbench_ruin_names | 0 | multiple_choice_grade | 39.51 | ± | 2.31 |
| bigbench_salient_translation_error_detection | 0 | multiple_choice_grade | 26.55 | ± | 1.40 |
| bigbench_snarks | 0 | multiple_choice_grade | 63.54 | ± | 3.59 |
| bigbench_sports_understanding | 0 | multiple_choice_grade | 61.66 | ± | 1.55 |
| bigbench_temporal_sequences | 0 | multiple_choice_grade | 29.50 | ± | 1.44 |
| bigbench_tracking_shuffled_objects_five_objects | 0 | multiple_choice_grade | 22.24 | ± | 1.18 |
| bigbench_tracking_shuffled_objects_seven_objects | 0 | multiple_choice_grade | 16.34 | ± | 0.88 |
| bigbench_tracking_shuffled_objects_three_objects | 0 | multiple_choice_grade | 45.00 | ± | 2.88 |
Bigbench average: 40.41%
Overall average: 50.9%
Elapsed time: 02:20:46
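The numbers above are consistent with each suite average being the unweighted mean of its per-task scores (taking acc_norm where both acc and acc_norm are reported, mc2 for TruthfulQA, and multiple_choice_grade for Bigbench), with the overall average being the unweighted mean of the four suite averages. A minimal Python sketch reproducing the arithmetic, with the score lists transcribed from the tables above:

```python
from statistics import mean

# Per-task scores transcribed from the tables above
# (acc_norm where reported; otherwise acc / mc2 / multiple_choice_grade).
agieval = [20.47, 36.10, 20.00, 38.43, 50.56, 71.36, 43.20, 31.82]
gpt4all = [55.97, 79.12, 86.33, 81.13, 45.00, 83.24, 74.74]
truthfulqa = [51.96]  # mc2
bigbench = [54.21, 68.02, 39.53, 20.33, 27.40, 19.86, 45.00, 36.00,
            50.40, 62.25, 39.51, 26.55, 63.54, 61.66, 29.50, 22.24,
            16.34, 45.00]  # multiple_choice_grade only

suite_averages = [round(mean(s), 2)
                  for s in (agieval, gpt4all, truthfulqa, bigbench)]
print(suite_averages)                   # [38.99, 72.22, 51.96, 40.41]
print(round(mean(suite_averages), 1))   # 50.9
```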