---
license: bsd-2-clause
dataset_info:
  features:
  - name: device_id
    dtype: string
  - name: x
    sequence: float64
  - name: 'y'
    dtype: int64
  splits:
  - name: train
    num_bytes: 53656756
    num_examples: 107553
  download_size: 64693258
  dataset_size: 53656756
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
task_categories:
- tabular-classification
size_categories:
- 100K<n<1M
---
# Dataset Card for SYNTHETIC
The SYNTHETIC dataset is part of the [LEAF](https://leaf.cmu.edu/) benchmark.
This version was generated with the default parameters, which produce a dataset with:
* inputs (`x`) of length 60;
* 5 unique labels (`y`);
* 1000 unique devices (`device_id`).
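As a quick sanity check of this schema, the dataset can be inspected directly with the Hugging Face `datasets` library (a minimal sketch; only the feature names from the schema above are assumed):
```python
from datasets import load_dataset

# Load the single "train" split of the SYNTHETIC dataset.
ds = load_dataset("flwrlabs/synthetic", split="train")

example = ds[0]
print(len(example["x"]))     # 60 input features
print(example["y"])          # label in {0, ..., 4}
print(example["device_id"])  # device identifier used for partitioning
```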
## Dataset Details
### Dataset Description
- **Curated by:** [LEAF](https://leaf.cmu.edu/)
- **License:** BSD 2-Clause License
## Uses
This dataset is intended to be used in Federated Learning settings.
### Direct Use
We recommend using [Flower Datasets](https://flower.ai/docs/datasets/) (flwr-datasets) together with [Flower](https://flower.ai/docs/framework/) (flwr).
To partition the dataset, do the following.
1. Install the package.
```bash
pip install flwr-datasets
```
2. Load and partition the dataset with Flower Datasets, which uses the HF Dataset under the hood.
```python
from flwr_datasets import FederatedDataset
from flwr_datasets.partitioner import NaturalIdPartitioner
fds = FederatedDataset(
    dataset="flwrlabs/synthetic",
    partitioners={"train": NaturalIdPartitioner(partition_by="device_id")},
)
partition = fds.load_partition(partition_id=0)
```
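The returned `partition` is a regular Hugging Face `Dataset`, so it can be converted to NumPy arrays for local training; the snippet below is just a sketch of one way to do this, not part of the Flower Datasets API:
```python
import numpy as np

# Convert one device's partition to NumPy for local training.
partition_np = partition.with_format("numpy")
X = np.asarray(partition_np["x"])  # features, shape (num_examples, 60)
y = np.asarray(partition_np["y"])  # labels, shape (num_examples,)
print(X.shape, y.shape)
```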
## Dataset Structure
The whole dataset is kept in the train split. If you want to hold out part of it for centralized evaluation, use a `Resplitter` (a full example will be added here soon).
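Until that example is available, one option is to hold out part of the data with the standard Hugging Face `train_test_split`; this sketch sits outside Flower Datasets, and the 20% test fraction and seed are arbitrary choices:
```python
from datasets import load_dataset

# Hold out 20% of the train split for centralized evaluation.
full = load_dataset("flwrlabs/synthetic", split="train")
splits = full.train_test_split(test_size=0.2, seed=42)
train_ds, centralized_eval_ds = splits["train"], splits["test"]
print(len(train_ds), len(centralized_eval_ds))
```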
## Citation
When working on the LEAF benchmark, please cite the original paper. If you're using this dataset with Flower Datasets, you can cite Flower.
**BibTeX:**
```
@article{DBLP:journals/corr/abs-1812-01097,
author = {Sebastian Caldas and
Peter Wu and
Tian Li and
Jakub Kone{\v{c}}n{\'y} and
H. Brendan McMahan and
Virginia Smith and
Ameet Talwalkar},
title = {{LEAF:} {A} Benchmark for Federated Settings},
journal = {CoRR},
volume = {abs/1812.01097},
year = {2018},
url = {http://arxiv.org/abs/1812.01097},
eprinttype = {arXiv},
eprint = {1812.01097},
timestamp = {Wed, 23 Dec 2020 09:35:18 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1812-01097.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
```
@article{DBLP:journals/corr/abs-2007-14390,
author = {Daniel J. Beutel and
Taner Topal and
Akhil Mathur and
Xinchi Qiu and
Titouan Parcollet and
Nicholas D. Lane},
title = {Flower: {A} Friendly Federated Learning Research Framework},
journal = {CoRR},
volume = {abs/2007.14390},
year = {2020},
url = {https://arxiv.org/abs/2007.14390},
eprinttype = {arXiv},
eprint = {2007.14390},
timestamp = {Mon, 03 Aug 2020 14:32:13 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2007-14390.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
## Dataset Card Contact
If you have any questions, please contact [Flower Labs](https://flower.ai/).