MoritzLaurer committed (verified) · Commit 65d182d · 1 Parent(s): c4c7183

Update README.md

Files changed (1): README.md (+17, -40)

README.md CHANGED
@@ -7,40 +7,34 @@ tags:
  metrics:
  - accuracy
  model-index:
- - name: ModernBERT-large-zeroshot-v2.0-2024-12-28-00-13
   results: []
  ---
 
- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- # ModernBERT-large-zeroshot-v2.0-2024-12-28-00-13
-
- This model is a fine-tuned version of [answerdotai/ModernBERT-large](https://huggingface.co/answerdotai/ModernBERT-large) on an unknown dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.1803
- - F1 Macro: 0.6624
- - F1 Micro: 0.7304
- - Accuracy Balanced: 0.6979
- - Accuracy: 0.7304
- - Precision Macro: 0.6899
- - Recall Macro: 0.6979
- - Precision Micro: 0.7304
- - Recall Micro: 0.7304
 
  ## Model description
 
- More information needed
 
- ## Intended uses & limitations
 
- More information needed
 
- ## Training and evaluation data
 
- More information needed
 
- ## Training procedure
 
  ### Training hyperparameters
 
@@ -56,23 +50,6 @@ The following hyperparameters were used during training:
  - lr_scheduler_warmup_ratio: 0.06
  - num_epochs: 2
 
- ### Training results
-
- | Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Micro | Accuracy Balanced | Accuracy | Precision Macro | Recall Macro | Precision Micro | Recall Micro |
- |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|:-----------------:|:--------:|:---------------:|:------------:|:---------------:|:------------:|
- | 0.3865 | 1.0 | 33915 | 0.3321 | 0.8584 | 0.8704 | 0.8600 | 0.8704 | 0.8569 | 0.8600 | 0.8704 | 0.8704 |
- | 0.2456 | 2.0000 | 67828 | 0.4069 | 0.8600 | 0.8728 | 0.8590 | 0.8728 | 0.8610 | 0.8590 | 0.8728 | 0.8728 |
-
-
- Breakdown by dataset
-
- |Datasets|Mean|Mean w/o NLI|mnli_m|mnli_mm|fevernli|anli_r1|anli_r2|anli_r3|wanli|lingnli|wellformedquery|rottentomatoes|amazonpolarity|imdb|yelpreviews|hatexplain|massive|banking77|emotiondair|emocontext|empathetic|agnews|yahootopics|biasframes_sex|biasframes_offensive|biasframes_intent|financialphrasebank|appreviews|hateoffensive|trueteacher|spam|wikitoxic_toxicaggregated|wikitoxic_obscene|wikitoxic_identityhate|wikitoxic_threat|wikitoxic_insult|manifesto|capsotu|
- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
- |Accuracy|0.85|0.851|0.942|0.944|0.894|0.812|0.717|0.716|0.836|0.909|0.815|0.899|0.964|0.951|0.984|0.814|0.8|0.744|0.752|0.802|0.544|0.899|0.735|0.934|0.864|0.877|0.913|0.953|0.921|0.821|0.989|0.901|0.927|0.931|0.959|0.911|0.497|0.73|
- |F1 macro|0.834|0.835|0.935|0.938|0.882|0.795|0.688|0.676|0.823|0.898|0.814|0.899|0.964|0.951|0.984|0.77|0.753|0.763|0.69|0.805|0.533|0.899|0.729|0.925|0.864|0.877|0.901|0.953|0.855|0.821|0.983|0.901|0.927|0.931|0.952|0.911|0.362|0.662|
- |Inference text/sec (GPU, batch=32)|1116.0|1104.0|1039.0|1241.0|1138.0|1102.0|1124.0|1133.0|1251.0|1240.0|1263.0|1231.0|1054.0|559.0|795.0|1238.0|1312.0|1285.0|1273.0|1268.0|992.0|1222.0|894.0|1176.0|1194.0|1197.0|1206.0|1166.0|1227.0|541.0|1199.0|1045.0|1054.0|1020.0|1005.0|1063.0|1214.0|1220.0|
-
-
 
  ### Framework versions
 
  metrics:
  - accuracy
  model-index:
+ - name: ModernBERT-large-zeroshot-v2.0
   results: []
  ---
 
+ # ModernBERT-large-zeroshot-v2.0
 
  ## Model description
 
+ This model is [answerdotai/ModernBERT-large](https://huggingface.co/answerdotai/ModernBERT-large)
+ fine-tuned on the same dataset mix as the `zeroshot-v2.0` models in the [Zeroshot Classifiers Collection](https://huggingface.co/collections/MoritzLaurer/zeroshot-classifiers-6548b4ff407bb19ff5c3ad6f).
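
A minimal usage sketch for the zero-shot classifier described above. The repo id `MoritzLaurer/ModernBERT-large-zeroshot-v2.0` is an assumption (committer namespace plus the model-index name); adjust it to the actual Hub id of this model card.

```python
from transformers import pipeline

# Load the model through the standard zero-shot-classification pipeline.
# The repo id below is assumed, not confirmed by this commit.
classifier = pipeline(
    "zero-shot-classification",
    model="MoritzLaurer/ModernBERT-large-zeroshot-v2.0",
)

text = "The app crashes every time I try to upload a photo."
candidate_labels = ["bug report", "feature request", "praise"]

# Returns the candidate labels ranked by entailment-style score.
result = classifier(text, candidate_labels, multi_label=False)
print(result["labels"][0], round(result["scores"][0], 3))
```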
 
+ ## General takeaways
+ - The model is very fast and memory efficient. It is several times faster and uses several times less memory than DeBERTaV3.
+ The memory efficiency enables larger batch sizes, and I got a ~2x speed increase by enabling bf16 (instead of fp16); see the sketch after this list.
+ - It performs slightly worse than DeBERTaV3 on average on the tasks tested below.
+ - I'm in the process of preparing a newer version trained on better synthetic data to make full use of the 8k context window
+ and to update the training mix of the older `zeroshot-v2.0` models.
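
A sketch of the bf16 loading mentioned in the first bullet, under the same assumed repo id as above; bf16 requires a GPU with bfloat16 support (e.g. Ampere or newer).

```python
import torch
from transformers import pipeline

# Loading in bf16 roughly halves memory use vs. fp32, which is what
# makes the larger batch sizes mentioned above fit on the GPU.
classifier = pipeline(
    "zero-shot-classification",
    model="MoritzLaurer/ModernBERT-large-zeroshot-v2.0",  # assumed repo id
    torch_dtype=torch.bfloat16,
    device=0,        # first CUDA GPU
    batch_size=32,   # same batch size as the inference-speed row below
)

texts = ["I love this product!", "Worst purchase I have ever made."]
print(classifier(texts, ["positive", "negative"]))
```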
 
 
 
+ ### Training results
+
+ |Datasets|Mean|Mean w/o NLI|mnli_m|mnli_mm|fevernli|anli_r1|anli_r2|anli_r3|wanli|lingnli|wellformedquery|rottentomatoes|amazonpolarity|imdb|yelpreviews|hatexplain|massive|banking77|emotiondair|emocontext|empathetic|agnews|yahootopics|biasframes_sex|biasframes_offensive|biasframes_intent|financialphrasebank|appreviews|hateoffensive|trueteacher|spam|wikitoxic_toxicaggregated|wikitoxic_obscene|wikitoxic_identityhate|wikitoxic_threat|wikitoxic_insult|manifesto|capsotu|
+ | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
+ |Accuracy|0.85|0.851|0.942|0.944|0.894|0.812|0.717|0.716|0.836|0.909|0.815|0.899|0.964|0.951|0.984|0.814|0.8|0.744|0.752|0.802|0.544|0.899|0.735|0.934|0.864|0.877|0.913|0.953|0.921|0.821|0.989|0.901|0.927|0.931|0.959|0.911|0.497|0.73|
+ |F1 macro|0.834|0.835|0.935|0.938|0.882|0.795|0.688|0.676|0.823|0.898|0.814|0.899|0.964|0.951|0.984|0.77|0.753|0.763|0.69|0.805|0.533|0.899|0.729|0.925|0.864|0.877|0.901|0.953|0.855|0.821|0.983|0.901|0.927|0.931|0.952|0.911|0.362|0.662|
+ |Inference text/sec (A100 40GB GPU, batch=32)|1116.0|1104.0|1039.0|1241.0|1138.0|1102.0|1124.0|1133.0|1251.0|1240.0|1263.0|1231.0|1054.0|559.0|795.0|1238.0|1312.0|1285.0|1273.0|1268.0|992.0|1222.0|894.0|1176.0|1194.0|1197.0|1206.0|1166.0|1227.0|541.0|1199.0|1045.0|1054.0|1020.0|1005.0|1063.0|1214.0|1220.0|
 
  ### Training hyperparameters
 
  - lr_scheduler_warmup_ratio: 0.06
  - num_epochs: 2
 
  ### Framework versions