---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: center
    dtype: int64
  - name: label
    dtype:
      class_label:
        names:
          '0': '0'
          '1': '1'
          '2': '2'
          '3': '3'
          '4': '4'
          '5': '5'
          '6': '6'
          '7': '7'
  splits:
  - name: train
    num_bytes: 100322881.119
    num_examples: 18597
  - name: test
    num_bytes: 25524081.6
    num_examples: 4650
  download_size: 143843380
  dataset_size: 125846962.71900001
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
license: cc-by-nc-4.0
task_categories:
- image-classification
size_categories:
- 10K<n<100K
---
# Dataset Card for Fed-ISIC-2019

Federated version of the ISIC 2019 dataset ([ISIC 2019 challenge](https://challenge.isic-archive.com/landing/2019/) and the [HAM10000 database](https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/DBW86T)). This implementation is derived from the [FLamby](https://github.com/owkin/FLamby/blob/main/flamby/datasets/fed_isic2019/README.md) implementation.

## Dataset Details

The dataset contains 23,247 images of skin lesions divided among 6 clients, each representing a different data center. The number of training/test samples per data center is shown in the table below (these counts can also be verified programmatically, as sketched after the table):

| center_id |  Train  |  Test  |
|:---------:|:-------:|:------:|
|     0     |  9930   |  2483  |
|     1     |  3163   |   791  |
|     2     |  2691   |   672  |
|     3     |  1807   |   452  |
|     4     |   655   |   164  |
|     5     |   351   |   88   |
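
The per-center counts above can be reproduced from the `center` column. A minimal sketch using the `datasets` library (the dataset id is the one used in the Flower Datasets example below):

```python
from collections import Counter

from datasets import load_dataset

# Count examples per data center in each split.
dataset = load_dataset("flwrlabs/fed-isic2019")
for split in ("train", "test"):
    print(split, sorted(Counter(dataset[split]["center"]).items()))
```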

### Dataset Sources

- **ISIC 2019 Challenge website:** https://challenge.isic-archive.com/landing/2019/
- **HAM10000 database website:** https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/DBW86T
- **FLamby:** https://github.com/owkin/FLamby/tree/main
- **FLamby Fed-ISIC-2019 README:** https://github.com/owkin/FLamby/blob/main/flamby/datasets/fed_isic2019/README.md
- **Fed-ISIC-2019 docs:** https://owkin.github.io/FLamby/fed_isic.html

## Use in FL

To prepare the dataset for FL settings, we recommend using [Flower Datasets](https://flower.ai/docs/datasets/) (flwr-datasets) for dataset download and partitioning, and [Flower](https://flower.ai/docs/framework/) (flwr) for conducting FL experiments.

To partition the dataset, do the following:
1. Install the package.
```bash
pip install flwr-datasets[vision]
```

2. Use Flower Datasets to download and partition the dataset (the HF dataset is used under the hood).
```python
from flwr_datasets import FederatedDataset
from flwr_datasets.partitioner import NaturalIdPartitioner

fds = FederatedDataset(
    dataset="flwrlabs/fed-isic2019",
    partitioners={"train": NaturalIdPartitioner(partition_by="center"),
                  "test": NaturalIdPartitioner(partition_by="center")}
)
partition_train = fds.load_partition(partition_id=0, split="train")
partition_test = fds.load_partition(partition_id=0, split="test")
```
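
With `NaturalIdPartitioner(partition_by="center")` there is one partition per data center, so `partition_id` corresponds to a `center_id` from the table above. A minimal sanity check, assuming the snippet above has been run:

```python
# One partition per data center (6 in total).
print(fds.partitioners["train"].num_partitions)  # expected: 6
print(partition_train[0]["center"])              # the center id backing this partition
```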

Note: to keep the same results as in FLamby, apply the following transformations:

```python
import albumentations
import random
import numpy as np
import torch


# Train dataset transformations
def apply_train_transforms(image_input):
    size = 200
    train_transforms = albumentations.Compose(
        [
            albumentations.RandomScale(0.07),
            albumentations.Rotate(50),
            albumentations.RandomBrightnessContrast(0.15, 0.1),
            albumentations.Flip(p=0.5),
            albumentations.Affine(shear=0.1),
            albumentations.RandomCrop(size, size),
            albumentations.CoarseDropout(random.randint(1, 8), 16, 16),
            albumentations.Normalize(always_apply=True),
        ]
    )
    images = []
    for image in image_input["image"]:
        augmented = train_transforms(image=np.array(image))["image"]
        transposed = np.transpose(augmented, (2, 0, 1)).astype(np.float32)
        images.append(torch.tensor(transposed, dtype=torch.float32))
    image_input["image"] = images
    return image_input


partition_train = partition_train.with_transform(apply_train_transforms,
                                                 columns="image")

# Test dataset transformations
def apply_test_transforms(image_input):
    size = 200
    test_transforms = albumentations.Compose(
        [
            albumentations.CenterCrop(size, size),
            albumentations.Normalize(always_apply=True),
        ]
    )
    images = []
    for image in image_input["image"]:
        augmented = test_transforms(image=np.array(image))["image"]
        transposed = np.transpose(augmented, (2, 0, 1)).astype(np.float32)
        images.append(torch.tensor(transposed, dtype=torch.float32))
    image_input["image"] = images
    return image_input


partition_test = partition_test.with_transform(apply_test_transforms,
                                               columns="image")
```
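
Once the transforms are attached, each partition behaves like a map-style PyTorch dataset and can be wrapped in a standard `DataLoader`. A minimal sketch; note that with `columns="image"` only the image column is returned, so pass `output_all_columns=True` to `with_transform` if you also need `label` and `center` in the batches:

```python
from torch.utils.data import DataLoader

# Each item is a dict with an augmented, normalized image tensor of shape (3, 200, 200).
train_loader = DataLoader(partition_train, batch_size=32, shuffle=True)

batch = next(iter(train_loader))
print(batch["image"].shape)  # e.g. torch.Size([32, 3, 200, 200])
```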

## Dataset Structure

### Data Instances
The first instance of the train split is presented below:
```
{
  'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=224x224>,
  'center': 0,
  'label': 2
}
```
### Data Split

```
DatasetDict({
    train: Dataset({
        features: ['image', 'center', 'label'],
        num_rows: 18597
    })
    test: Dataset({
        features: ['image', 'center', 'label'],
        num_rows: 4650
    })
})
```
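
The splits shown above can be loaded directly with the `datasets` library; a minimal sketch:

```python
from datasets import load_dataset

# Downloads the Parquet files on first use and returns the DatasetDict shown above.
dataset = load_dataset("flwrlabs/fed-isic2019")
print(dataset)              # train: 18,597 rows; test: 4,650 rows
print(dataset["train"][0])  # first training instance: image, center, label
```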

## Citation

When working with the Fed-ISIC-2019 dataset, please cite the original FLamby paper.
If you use this dataset with Flower Datasets and Flower, please also cite Flower.

**BibTeX:**

FLamby:
```
@inproceedings{NEURIPS2022_232eee8e,
 author = {Ogier du Terrail, Jean and Ayed, Samy-Safwan and Cyffers, Edwige and Grimberg, Felix and He, Chaoyang and Loeb, Regis and Mangold, Paul and Marchand, Tanguy and Marfoq, Othmane and Mushtaq, Erum and Muzellec, Boris and Philippenko, Constantin and Silva, Santiago and Tele\'{n}czuk, Maria and Albarqouni, Shadi and Avestimehr, Salman and Bellet, Aur\'{e}lien and Dieuleveut, Aymeric and Jaggi, Martin and Karimireddy, Sai Praneeth and Lorenzi, Marco and Neglia, Giovanni and Tommasi, Marc and Andreux, Mathieu},
 booktitle = {Advances in Neural Information Processing Systems},
 editor = {S. Koyejo and S. Mohamed and A. Agarwal and D. Belgrave and K. Cho and A. Oh},
 pages = {5315--5334},
 publisher = {Curran Associates, Inc.},
 title = {FLamby: Datasets and Benchmarks for Cross-Silo Federated Learning in Realistic Healthcare Settings},
 url = {https://proceedings.neurips.cc/paper_files/paper/2022/file/232eee8ef411a0a316efa298d7be3c2b-Paper-Datasets_and_Benchmarks.pdf},
 volume = {35},
 year = {2022}
}

```

Flower:

```
@article{DBLP:journals/corr/abs-2007-14390,
  author       = {Daniel J. Beutel and
                  Taner Topal and
                  Akhil Mathur and
                  Xinchi Qiu and
                  Titouan Parcollet and
                  Nicholas D. Lane},
  title        = {Flower: {A} Friendly Federated Learning Research Framework},
  journal      = {CoRR},
  volume       = {abs/2007.14390},
  year         = {2020},
  url          = {https://arxiv.org/abs/2007.14390},
  eprinttype    = {arXiv},
  eprint       = {2007.14390},
  timestamp    = {Mon, 03 Aug 2020 14:32:13 +0200},
  biburl       = {https://dblp.org/rec/journals/corr/abs-2007-14390.bib},
  bibsource    = {dblp computer science bibliography, https://dblp.org}
}
```

## Other References
The "ISIC 2019: Training" is the aggregate of the following datasets:

BCN_20000 Dataset: (c) Department of Dermatology, Hospital Clínic de Barcelona

HAM10000 Dataset: (c) by ViDIR Group, Department of Dermatology, Medical University of Vienna; HAM10000 dataset

MSK Dataset: (c) Anonymous; challenge 2017; challenge 2018

The full citations are given below:

[1] Tschandl P., Rosendahl C. & Kittler H. The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Sci. Data 5, 180161, doi:10.1038/sdata.2018.161 (2018).

[2] Noel C. F. Codella, David Gutman, M. Emre Celebi, Brian Helba, Michael A. Marchetti, Stephen W. Dusza, Aadi Kalloo, Konstantinos Liopyris, Nabin Mishra, Harald Kittler, Allan Halpern: “Skin Lesion Analysis Toward Melanoma Detection: A Challenge at the 2017 International Symposium on Biomedical Imaging (ISBI), Hosted by the International Skin Imaging Collaboration (ISIC)”, 2017; arXiv:1710.05006.

[3] Marc Combalia, Noel C. F. Codella, Veronica Rotemberg, Brian Helba, Veronica Vilaplana, Ofer Reiter, Allan C. Halpern, Susana Puig, Josep Malvehy: “BCN20000: Dermoscopic Lesions in the Wild”, 2019; arXiv:1908.02288.

## Dataset Card Contact

If you have any questions about the dataset preprocessing and preparation, please contact [Flower Labs](https://flower.ai/).