Update README.md
README.md

---
library_name: peft
license: mit
datasets:
- wikitext
language:
- en
metrics:
- f1
---

RoBERTa large fine-tuned with [LoRA](https://arxiv.org/pdf/2106.09685.pdf) to predict comma placement in text. It expects input with commas removed
and classifies each token according to whether a comma should be inserted after it.

As a PEFT model, it does not seem to work well with Hugging Face pipelines, at least at the time of writing.

Examples of usage and a wrapper class for text-to-text comma fixing can be found in the [demo](https://huggingface.co/spaces/klasocki/comma-fixer); a minimal sketch of such a wrapper also follows the loading example below.

Loading the raw model in code:
```python
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForTokenClassification, AutoTokenizer

id2label = {
    0: "O",
    1: "B-COMMA"
}
label2id = {
    "O": 0,
    "B-COMMA": 1
}

# Load the base model with a token-classification head, then apply the LoRA adapter.
peft_model_id = 'klasocki/roberta-large-lora-ner-comma-fixer'
config = PeftConfig.from_pretrained(peft_model_id)
inference_model = AutoModelForTokenClassification.from_pretrained(
    config.base_model_name_or_path, num_labels=len(id2label), id2label=id2label, label2id=label2id
)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(inference_model, peft_model_id)

text = "This text should have commas here here and there however it does not."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

tokens = inputs.tokens()
predictions = torch.argmax(logits, dim=2)

# A token labelled B-COMMA should have a comma inserted after it.
for token, prediction in zip(tokens, predictions[0].numpy()):
    print((token, model.config.id2label[prediction]))

### OUTPUT:
('<s>', 'O')
('This', 'O')
('Ġtext', 'O')
('Ġshould', 'O')
('Ġhave', 'O')
('Ġcomm', 'O')
('as', 'O')
('Ġhere', 'B-COMMA')
('Ġhere', 'O')
('Ġand', 'O')
('Ġthere', 'B-COMMA')
('Ġhowever', 'O')
('Ġit', 'O')
('Ġdoes', 'O')
('Ġnot', 'O')
('.', 'O')
('</s>', 'O')
```
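
The token-level predictions above can be turned back into text with the commas re-inserted, which is what the demo's wrapper class does. The `fix_commas` helper below is a minimal illustrative sketch of that idea, not the demo's actual implementation; it reuses the `model` and `tokenizer` objects loaded above.

```python
import torch


def fix_commas(model, tokenizer, text: str) -> str:
    """Hypothetical helper: re-insert commas predicted by the token classifier."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    predictions = torch.argmax(logits, dim=2)[0].tolist()

    pieces = []
    for token_id, label_id in zip(inputs["input_ids"][0].tolist(), predictions):
        token = tokenizer.convert_ids_to_tokens(token_id)
        if token in tokenizer.all_special_tokens:
            continue
        # RoBERTa's BPE marks word starts with 'Ġ'; swap it back to a space.
        pieces.append(token.replace("Ġ", " "))
        if model.config.id2label[label_id] == "B-COMMA":
            pieces.append(",")
    return "".join(pieces).strip()


print(fix_commas(model, tokenizer, "This text should have commas here here and there however it does not."))
# With the predictions shown above this yields:
# "This text should have commas here, here and there, however it does not."
```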

## Evaluation results

Results for commas on the wikitext validation set:

| Model     | Precision | Recall | F1   | Support |
|-----------|-----------|--------|------|---------|
| baseline* | 0.79      | 0.72   | 0.75 | 10079   |
| ours      | 0.84      | 0.84   | 0.84 | 10079   |

*baseline is the [oliverguhr/fullstop-punctuation-multilang-large](https://huggingface.co/oliverguhr/fullstop-punctuation-multilang-large) model evaluated on commas.
## Training procedure

To compare with the baseline, we fine-tune the same model, RoBERTa large, on the English wikitext dataset.
We use a similar approach, treating comma fixing as a NER problem: for each token, the model predicts whether a comma should be inserted after it.

The biggest advantage of this approach is that it preserves the input structure and only focuses on commas, ensuring that nothing else is changed and that the model does not have to learn to repeat the input back when no commas should be inserted.
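
The sketch below shows how NER-style examples can be derived from ordinary text that already contains commas: strip the commas and label each word according to whether one was removed after it. The exact preprocessing used for training is an assumption; `make_example` is an illustrative helper.

```python
def make_example(text: str):
    """Hypothetical preprocessing: strip commas from a sentence and label the
    words that should have a comma re-inserted after them."""
    tokens, labels = [], []
    for word in text.split():
        stripped = word.rstrip(",")
        tokens.append(stripped)
        labels.append("B-COMMA" if word.endswith(",") else "O")
    return tokens, labels


tokens, labels = make_example("This text should have commas here, here and there, however it does not.")
# tokens -> ['This', 'text', ..., 'here', 'here', 'and', 'there', 'however', ...]
# labels -> 'B-COMMA' for the first 'here' and for 'there', 'O' everywhere else
```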

We use LoRA to reduce training time and costs, and synthesize a training dataset from wikitext.
The model seems to converge after only about 15,000 training examples, so a small subset of wikitext is more than enough.
Adding more languages and domains could be explored in the future.
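
For reference, a LoRA setup along these lines can be built with PEFT as sketched below; the hyperparameters shown are illustrative assumptions, not the exact training configuration.

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForTokenClassification

# Base model with a freshly initialized token-classification head.
base_model = AutoModelForTokenClassification.from_pretrained(
    "roberta-large",
    num_labels=2,
    id2label={0: "O", 1: "B-COMMA"},
    label2id={"O": 0, "B-COMMA": 1},
)

# Illustrative LoRA hyperparameters; the values used for the released adapter are assumptions.
lora_config = LoraConfig(
    task_type=TaskType.TOKEN_CLS,
    r=16,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["query", "value"],
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only a small fraction of RoBERTa-large's weights are trained
```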
### Framework versions

- PEFT 0.5.0
- Transformers 4.31.0
- Torch 2.0.1