Commit 49cc4ce • Update paper link
yoshitomo-matsubara committed
1 Parent(s): b065544

README.md CHANGED
@@ -58,12 +58,12 @@ license: cdla-permissive-2.0
 ## Dataset Description
 
 - **Homepage:** [Amazon Science](https://www.amazon.science/publications/cross-lingual-knowledge-distillation-for-answer-sentence-selection-in-low-resource-languages)
-- **Paper:** [Cross-Lingual Knowledge Distillation for Answer Sentence Selection in Low-Resource Languages](https://
+- **Paper:** [Cross-Lingual Knowledge Distillation for Answer Sentence Selection in Low-Resource Languages](https://aclanthology.org/2023.findings-acl.885/)
 - **Point of Contact:** [Yoshitomo Matsubara]([email protected])
 
 ### Dataset Summary
 
-***Xtr-WikiQA*** is an Answer Sentence Selection (AS2) dataset in 9 non-English languages, proposed in our paper accepted at ACL 2023 (Findings): **Cross-Lingual Knowledge Distillation for Answer Sentence Selection in Low-Resource Languages
+***Xtr-WikiQA*** is an Answer Sentence Selection (AS2) dataset in 9 non-English languages, proposed in our paper accepted at ACL 2023 (Findings): [**Cross-Lingual Knowledge Distillation for Answer Sentence Selection in Low-Resource Languages**](https://aclanthology.org/2023.findings-acl.885/).
 This dataset is based on an English AS2 dataset, WikiQA ([Original](https://msropendata.com/datasets/21032bb1-88bd-4656-9570-3172ae1757f0), [Hugging Face](https://huggingface.co/datasets/wiki_qa)).
 For translations, we used [Amazon Translate](https://aws.amazon.com/translate/).
 

@@ -142,11 +142,12 @@ The source of Xtr-WikiQA dataset is [WikiQA](https://msropendata.com/datasets/21
 
 ### Citation Information
 
-```
-@
-title={Cross-Lingual Knowledge Distillation for Answer Sentence Selection in Low-Resource Languages},
+```bibtex
+@inproceedings{gupta2023cross-lingual,
+title={{Cross-Lingual Knowledge Distillation for Answer Sentence Selection in Low-Resource Languages}},
 author={Gupta, Shivanshu and Matsubara, Yoshitomo and Chadha, Ankit and Moschitti, Alessandro},
-
+booktitle={Findings of the Association for Computational Linguistics: ACL 2023},
+pages={14078--14092},
 year={2023}
 }
 ```
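
The Dataset Summary in the diff above describes a multilingual AS2 dataset hosted on the Hugging Face Hub. Below is a minimal sketch of how such a dataset might be loaded with the `datasets` library; the repository id and any language configuration name are assumptions inferred from the committer and dataset name, not details confirmed by this commit, so check the dataset card for the exact identifiers.

```python
from datasets import load_dataset

# Hypothetical repository id; a per-language configuration (e.g. "de")
# may also need to be passed as the second argument to load_dataset.
xtr_wikiqa = load_dataset("yoshitomo-matsubara/Xtr-WikiQA")

# AS2 data pairs each question with candidate answer sentences and a binary
# relevance label, so a row is expected to look roughly like:
# {"question": "...", "sentence": "...", "label": 0}
print(xtr_wikiqa)
```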