Update README.md
README.md CHANGED
@@ -19,6 +19,7 @@ Loading the raw model in code:
 ```python
 from peft import PeftModel, PeftConfig
 from transformers import AutoModelForTokenClassification
+import torch
 
 id2label = {
     0: "O",
@@ -76,8 +77,10 @@ Results for commas on the wikitext validation set:
 |----------|-----------|--------|------|---------|
 | baseline* | 0.79 | 0.72 | 0.75 | 10079 |
 | ours | 0.84 | 0.84 | 0.84 | 10079 |
+
 *baseline is the [oliverguhr/fullstop-punctuation-multilang-large](https://huggingface.co/oliverguhr/fullstop-punctuation-multilang-large)
-model evaluated on commas.
+model evaluated on commas, out of domain on wikitext. In-domain, the authors report an F1 of 0.819 for English political speeches;
+Wikipedia text appears to be more challenging for comma restoration.
 
 ## Training procedure
 
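Note on the first hunk: it shows only the top of the loading snippet. A minimal end-to-end sketch of what the full snippet plausibly builds toward is below, assuming a hypothetical adapter repo id and a second `","` label (neither appears in the diff); only the imports, `import torch`, and the `id2label` opening are taken from the README itself.

```python
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForTokenClassification, AutoTokenizer

# Label map for comma restoration; only the "O" entry is visible in the
# hunk above, so the "," entry is an assumption for illustration.
id2label = {
    0: "O",
    1: ",",
}

peft_model_id = "your-org/comma-restoration-adapter"  # hypothetical repo id
config = PeftConfig.from_pretrained(peft_model_id)

# Load the base token-classification model, then attach the PEFT adapter.
base_model = AutoModelForTokenClassification.from_pretrained(
    config.base_model_name_or_path,
    num_labels=len(id2label),
    id2label=id2label,
    label2id={v: k for k, v in id2label.items()},
)
model = PeftModel.from_pretrained(base_model, peft_model_id)
model.eval()

# The `import torch` added in this commit is needed for torch.no_grad().
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
inputs = tokenizer("hello world how are you", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
labels = [id2label[int(i)] for i in logits.argmax(-1)[0]]
print(labels)  # one predicted label per (sub)token
```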
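The table in the second hunk reports per-class precision, recall, F1, and support for the comma label. A minimal sketch of how such per-class numbers can be computed with scikit-learn is below; the flat per-token `gold`/`pred` lists are placeholders, since the actual evaluation data and script are not part of this diff.

```python
from sklearn.metrics import precision_recall_fscore_support

# Placeholder per-token labels ("O" or ","); the real wikitext
# validation labels are not shown in the diff.
gold = ["O", ",", "O", "O", ","]
pred = ["O", ",", ",", "O", "O"]

# Score only the comma class, matching the rows in the table above.
p, r, f1, support = precision_recall_fscore_support(
    gold, pred, labels=[","], average=None
)
print(f"precision={p[0]:.2f} recall={r[0]:.2f} "
      f"f1={f1[0]:.2f} support={support[0]}")
```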