harshhpareek mamta committed on
Commit
a3a7b38
0 Parent(s):

Duplicate from evaluate-metric/bertscore

Co-authored-by: Mamta Narang <[email protected]>

Files changed (5)
  1. .gitattributes +27 -0
  2. README.md +136 -0
  3. app.py +6 -0
  4. bertscore.py +215 -0
  5. requirements.txt +2 -0
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,136 @@
+ ---
+ title: BERT Score
+ emoji: 🤗
+ colorFrom: blue
+ colorTo: red
+ sdk: gradio
+ sdk_version: 3.0.2
+ app_file: app.py
+ pinned: false
+ tags:
+ - evaluate
+ - metric
+ description: >-
+   BERTScore leverages the pre-trained contextual embeddings from BERT and
+   matches words in candidate and reference sentences by cosine similarity. It
+   has been shown to correlate with human judgment on sentence-level and
+   system-level evaluation. Moreover, BERTScore computes precision, recall, and
+   F1 measure, which can be useful for evaluating different language generation
+   tasks.
+
+   See the project's README at https://github.com/Tiiiger/bert_score#readme for
+   more information.
+ duplicated_from: evaluate-metric/bertscore
+ ---
+
+ # Metric Card for BERT Score
+
+ ## Metric description
+
+ BERTScore is an automatic evaluation metric for text generation that computes a similarity score for each token in the candidate sentence with each token in the reference sentence. It leverages the pre-trained contextual embeddings from [BERT](https://huggingface.co/bert-base-uncased) models and matches words in candidate and reference sentences by cosine similarity.
+
+ Moreover, BERTScore computes precision, recall, and F1 measure, which can be useful for evaluating different language generation tasks.
+
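As a rough illustration of the idea, the per-sentence scores can be sketched with plain NumPy over token embeddings. This is a simplified sketch only: the real metric uses BERT contextual embeddings and supports idf weighting, and `cand_emb`/`ref_emb` here are hypothetical inputs standing in for those embeddings.

```python
import numpy as np

def greedy_bertscore(cand_emb, ref_emb):
    """Greedy cosine matching over token embeddings (simplified sketch)."""
    # Normalize rows so dot products become cosine similarities.
    c = cand_emb / np.linalg.norm(cand_emb, axis=1, keepdims=True)
    r = ref_emb / np.linalg.norm(ref_emb, axis=1, keepdims=True)
    sim = c @ r.T  # pairwise cosine similarities, shape (m, n)
    precision = sim.max(axis=1).mean()  # each candidate token -> best reference token
    recall = sim.max(axis=0).mean()     # each reference token -> best candidate token
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

When candidate and reference embeddings are identical, all three scores are 1.0, matching the "maximal values" example further down.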
+ ## How to use
+
+ BERTScore takes three mandatory arguments: `predictions` (a list of candidate sentences), `references` (a list of strings, or a list of lists of strings, of reference sentences), and either `lang` (a two-letter string indicating the language of the sentences, in [ISO 639-1 format](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes)) or `model_type` (a string specifying which model to use, according to the BERT specification). When `lang` is specified, the metric uses the suggested model for that language; otherwise it uses the model indicated by `model_type`.
+
+ ```python
+ from evaluate import load
+ bertscore = load("bertscore")
+ predictions = ["hello there", "general kenobi"]
+ references = ["hello there", "general kenobi"]
+ results = bertscore.compute(predictions=predictions, references=references, lang="en")
+ ```
+
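Note that `references` may be given either as a flat list of strings (one reference per prediction) or as a list of lists of strings (multiple references per prediction). Internally, the module normalizes the flat form into the nested one, roughly:

```python
def normalize_references(references):
    # A single reference string per prediction becomes a one-element list,
    # matching the list-of-lists form the scorer expects.
    if isinstance(references[0], str):
        references = [[ref] for ref in references]
    return references
```

So `["hello there", "general kenobi"]` is treated as `[["hello there"], ["general kenobi"]]`.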
+ BERTScore also accepts multiple optional arguments:
+
+ `num_layers` (int): The layer of representation to use. The default is the number of layers tuned on WMT16 correlation data, which depends on the `model_type` used.
+
+ `verbose` (bool): Turn on intermediate status updates. The default value is `False`.
+
+ `idf` (bool or dict): Use idf weighting; can also be a precomputed idf_dict.
+
+ `device` (str): The device on which the contextual embedding model will be allocated. If this argument is `None`, the model lives on `cuda:0` if CUDA is available.
+
+ `nthreads` (int): Number of threads used for computation. The default value is `4`.
+
+ `rescale_with_baseline` (bool): Rescale BERTScore with the pre-computed baseline. The default value is `False`.
+
+ `batch_size` (int): The BERTScore processing batch size. The default value is `64`. Note that at least one of `model_type` or `lang` must be specified, and `lang` needs to be specified when `rescale_with_baseline` is `True`.
+
+ `baseline_path` (str): Customized baseline file.
+
+ `use_fast_tokenizer` (bool): `use_fast` parameter passed to the HF tokenizer. The default value is `False`.
+
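The `idf` option down-weights tokens that appear in many reference sentences. As a rough sketch, a smoothed inverse document frequency over the reference sentences can be computed as below; this is a generic formulation for illustration, and the exact weighting used by `bert_score` (including its tokenization) may differ.

```python
import math
from collections import Counter

def idf_weights(reference_sentences):
    # Document frequency: in how many reference sentences each token appears.
    n = len(reference_sentences)
    df = Counter()
    for sent in reference_sentences:
        df.update(set(sent.split()))
    # Add-one smoothing keeps rare tokens from getting unbounded weight.
    return {tok: math.log((n + 1) / (c + 1)) for tok, c in df.items()}
```

Tokens appearing in every reference get a weight near zero, while rarer tokens get larger weights.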
+ ## Output values
+
+ BERTScore outputs a dictionary with the following values:
+
+ `precision`: The [precision](https://huggingface.co/metrics/precision) for each sentence from the `predictions` + `references` lists, which ranges from 0.0 to 1.0.
+
+ `recall`: The [recall](https://huggingface.co/metrics/recall) for each sentence from the `predictions` + `references` lists, which ranges from 0.0 to 1.0.
+
+ `f1`: The [F1 score](https://huggingface.co/metrics/f1) for each sentence from the `predictions` + `references` lists, which ranges from 0.0 to 1.0.
+
+ `hashcode`: The hashcode of the library.
+
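These lists are per-sentence; a single corpus-level number is commonly obtained by averaging them, which is a convention rather than something the metric does for you (the `results` dictionary below uses hypothetical values for illustration):

```python
# `results` stands in for the dictionary returned by bertscore.compute().
results = {"precision": [1.0, 0.9], "recall": [1.0, 0.8], "f1": [1.0, 0.85]}

# Average the per-sentence F1 scores into one corpus-level score.
avg_f1 = sum(results["f1"]) / len(results["f1"])
print(round(avg_f1, 3))
```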
+ ### Values from popular papers
+ The [original BERTScore paper](https://openreview.net/pdf?id=SkeHuCVFDr) reported average model selection accuracies (Hits@1) on WMT18 hybrid systems for different language pairs, which ranged from 0.004 for `en<->tr` to 0.824 for `en<->de`.
+
+ For more recent model performance, see the [metric leaderboard](https://paperswithcode.com/paper/bertscore-evaluating-text-generation-with).
+
+ ## Examples
+
+ Maximal values with the `distilbert-base-uncased` model:
+
+ ```python
+ from evaluate import load
+ bertscore = load("bertscore")
+ predictions = ["hello world", "general kenobi"]
+ references = ["hello world", "general kenobi"]
+ results = bertscore.compute(predictions=predictions, references=references, model_type="distilbert-base-uncased")
+ print(results)
+ {'precision': [1.0, 1.0], 'recall': [1.0, 1.0], 'f1': [1.0, 1.0], 'hashcode': 'distilbert-base-uncased_L5_no-idf_version=0.3.10(hug_trans=4.10.3)'}
+ ```
+
+ Partial match with the `bert-base-uncased` model:
+
+ ```python
+ from evaluate import load
+ bertscore = load("bertscore")
+ predictions = ["hello world", "general kenobi"]
+ references = ["goodnight moon", "the sun is shining"]
+ results = bertscore.compute(predictions=predictions, references=references, model_type="bert-base-uncased")
+ print(results)
+ {'precision': [0.7380737066268921, 0.5584042072296143], 'recall': [0.7380737066268921, 0.5889028906822205], 'f1': [0.7380737066268921, 0.5732481479644775], 'hashcode': 'bert-base-uncased_L5_no-idf_version=0.3.10(hug_trans=4.10.3)'}
+ ```
+
+ ## Limitations and bias
+
+ The [original BERTScore paper](https://openreview.net/pdf?id=SkeHuCVFDr) showed that BERTScore correlates well with human judgment on sentence-level and system-level evaluation, but this depends on the model and language pair selected.
+
+ Furthermore, not all languages are supported by the metric; see the [BERTScore supported language list](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages) for more information.
+
+ Finally, calculating the BERTScore metric involves downloading the BERT model that is used to compute the score. The default model for `en`, `roberta-large`, takes over 1.4 GB of storage space, and downloading it can take a significant amount of time depending on the speed of your internet connection. If this is an issue, choose a smaller model; for instance, `distilbert-base-uncased` is 268 MB. A full list of compatible models can be found [here](https://docs.google.com/spreadsheets/d/1RKOVpselB98Nnh_EOC4A2BYn8_201tmPODpNWu4w7xI/edit#gid=0).
+
+
+ ## Citation
+
+ ```bibtex
+ @inproceedings{bert-score,
+   title={BERTScore: Evaluating Text Generation with BERT},
+   author={Tianyi Zhang* and Varsha Kishore* and Felix Wu* and Kilian Q. Weinberger and Yoav Artzi},
+   booktitle={International Conference on Learning Representations},
+   year={2020},
+   url={https://openreview.net/forum?id=SkeHuCVFDr}
+ }
+ ```
+
+ ## Further References
+
+ - [BERTScore Project README](https://github.com/Tiiiger/bert_score#readme)
+ - [BERTScore ICLR 2020 Poster Presentation](https://iclr.cc/virtual_2020/poster_SkeHuCVFDr.html)
app.py ADDED
@@ -0,0 +1,6 @@
+ import evaluate
+ from evaluate.utils import launch_gradio_widget
+
+
+ module = evaluate.load("bertscore")
+ launch_gradio_widget(module)
bertscore.py ADDED
@@ -0,0 +1,215 @@
+ # Copyright 2020 The HuggingFace Evaluate Authors.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """BERTScore metric."""
+
+ import functools
+ from contextlib import contextmanager
+
+ import bert_score
+ import datasets
+ from packaging import version
+
+ import evaluate
+
+
+ @contextmanager
+ def filter_logging_context():
+     def filter_log(record):
+         # Suppress the expected "This IS expected if you are initializing" warning.
+         return "This IS expected if you are initializing" not in record.msg
+
+     logger = datasets.utils.logging.get_logger("transformers.modeling_utils")
+     logger.addFilter(filter_log)
+     try:
+         yield
+     finally:
+         logger.removeFilter(filter_log)
+
+
+ _CITATION = """\
+ @inproceedings{bert-score,
+   title={BERTScore: Evaluating Text Generation with BERT},
+   author={Tianyi Zhang* and Varsha Kishore* and Felix Wu* and Kilian Q. Weinberger and Yoav Artzi},
+   booktitle={International Conference on Learning Representations},
+   year={2020},
+   url={https://openreview.net/forum?id=SkeHuCVFDr}
+ }
+ """
+
+ _DESCRIPTION = """\
+ BERTScore leverages the pre-trained contextual embeddings from BERT and matches words in candidate and reference
+ sentences by cosine similarity.
+ It has been shown to correlate with human judgment on sentence-level and system-level evaluation.
+ Moreover, BERTScore computes precision, recall, and F1 measure, which can be useful for evaluating different language
+ generation tasks.
+
+ See the project's README at https://github.com/Tiiiger/bert_score#readme for more information.
+ """
+
+ _KWARGS_DESCRIPTION = """
+ BERTScore metrics, with the hashcode, computed from a source against one or more references.
+
+ Args:
+     predictions (list of str): Prediction/candidate sentences.
+     references (list of str or list of list of str): Reference sentences.
+     lang (str): Language of the sentences (e.g. 'en'); required if `model_type` is not given.
+     model_type (str): Model specification; defaults to the suggested model
+         for the target language. At least one of `model_type` or `lang`
+         must be specified.
+     num_layers (int): The layer of representation to use;
+         defaults to the number of layers tuned on WMT16 correlation data.
+     verbose (bool): Turn on intermediate status updates.
+     idf (bool or dict): Use idf weighting; can also be a precomputed idf_dict.
+     device (str): The device on which the contextual embedding model will be allocated.
+         If this argument is None, the model lives on cuda:0 if cuda is available.
+     nthreads (int): Number of threads.
+     batch_size (int): BERTScore processing batch size. `lang` needs to be
+         specified when `rescale_with_baseline` is True.
+     rescale_with_baseline (bool): Rescale BERTScore with the pre-computed baseline.
+     baseline_path (str): Customized baseline file.
+     use_fast_tokenizer (bool): `use_fast` parameter passed to the HF tokenizer. New in version 0.3.10.
+
+ Returns:
+     precision: Precision.
+     recall: Recall.
+     f1: F1 score.
+     hashcode: Hashcode of the library.
+
+ Examples:
+
+     >>> predictions = ["hello there", "general kenobi"]
+     >>> references = ["hello there", "general kenobi"]
+     >>> bertscore = evaluate.load("bertscore")
+     >>> results = bertscore.compute(predictions=predictions, references=references, lang="en")
+     >>> print([round(v, 2) for v in results["f1"]])
+     [1.0, 1.0]
+ """
+
+
+ @evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)
+ class BERTScore(evaluate.Metric):
+     def _info(self):
+         return evaluate.MetricInfo(
+             description=_DESCRIPTION,
+             citation=_CITATION,
+             homepage="https://github.com/Tiiiger/bert_score",
+             inputs_description=_KWARGS_DESCRIPTION,
+             features=[
+                 datasets.Features(
+                     {
+                         "predictions": datasets.Value("string", id="sequence"),
+                         "references": datasets.Sequence(datasets.Value("string", id="sequence"), id="references"),
+                     }
+                 ),
+                 datasets.Features(
+                     {
+                         "predictions": datasets.Value("string", id="sequence"),
+                         "references": datasets.Value("string", id="sequence"),
+                     }
+                 ),
+             ],
+             codebase_urls=["https://github.com/Tiiiger/bert_score"],
+             reference_urls=[
+                 "https://github.com/Tiiiger/bert_score",
+                 "https://arxiv.org/abs/1904.09675",
+             ],
+         )
+
+     def _compute(
+         self,
+         predictions,
+         references,
+         lang=None,
+         model_type=None,
+         num_layers=None,
+         verbose=False,
+         idf=False,
+         device=None,
+         batch_size=64,
+         nthreads=4,
+         all_layers=False,
+         rescale_with_baseline=False,
+         baseline_path=None,
+         use_fast_tokenizer=False,
+     ):
+         if isinstance(references[0], str):
+             references = [[ref] for ref in references]
+
+         if idf:
+             idf_sents = [r for ref in references for r in ref]
+         else:
+             idf_sents = None
+
+         get_hash = bert_score.utils.get_hash
+         scorer = bert_score.BERTScorer
+
+         if version.parse(bert_score.__version__) >= version.parse("0.3.10"):
+             get_hash = functools.partial(get_hash, use_fast_tokenizer=use_fast_tokenizer)
+             scorer = functools.partial(scorer, use_fast_tokenizer=use_fast_tokenizer)
+         elif use_fast_tokenizer:
+             raise ImportWarning(
+                 "To use a fast tokenizer, the module `bert-score>=0.3.10` is required, and the current version of "
+                 "`bert-score` doesn't match this condition.\n"
+                 'You can install it with `pip install "bert-score>=0.3.10"`.'
+             )
+
+         if model_type is None:
+             if lang is None:
+                 raise ValueError(
+                     "Either 'lang' (e.g. 'en') or 'model_type' (e.g. 'microsoft/deberta-xlarge-mnli')"
+                     " must be specified"
+                 )
+             model_type = bert_score.utils.lang2model[lang.lower()]
+
+         if num_layers is None:
+             num_layers = bert_score.utils.model2layers[model_type]
+
+         hashcode = get_hash(
+             model=model_type,
+             num_layers=num_layers,
+             idf=idf,
+             rescale_with_baseline=rescale_with_baseline,
+             use_custom_baseline=baseline_path is not None,
+         )
+
+         with filter_logging_context():
+             if not hasattr(self, "cached_bertscorer") or self.cached_bertscorer.hash != hashcode:
+                 self.cached_bertscorer = scorer(
+                     model_type=model_type,
+                     num_layers=num_layers,
+                     batch_size=batch_size,
+                     nthreads=nthreads,
+                     all_layers=all_layers,
+                     idf=idf,
+                     idf_sents=idf_sents,
+                     device=device,
+                     lang=lang,
+                     rescale_with_baseline=rescale_with_baseline,
+                     baseline_path=baseline_path,
+                 )
+
+         (P, R, F) = self.cached_bertscorer.score(
+             cands=predictions,
+             refs=references,
+             verbose=verbose,
+             batch_size=batch_size,
+         )
+         output_dict = {
+             "precision": P.tolist(),
+             "recall": R.tolist(),
+             "f1": F.tolist(),
+             "hashcode": hashcode,
+         }
+         return output_dict
requirements.txt ADDED
@@ -0,0 +1,2 @@
+ git+https://github.com/huggingface/evaluate@6abb0d53b82b1e5efea5d683b91d7990a653c78d
+ bert_score