---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: hushem_1x_deit_small_adamax_001_fold3
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.46511627906976744
---

# hushem_1x_deit_small_adamax_001_fold3

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set (an inference sketch follows the results below):

- Loss: 3.7699
- Accuracy: 0.4651
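
The snippet below is a minimal sketch of loading this checkpoint for inference with the Transformers Auto classes. The repository id `hkivancoral/hushem_1x_deit_small_adamax_001_fold3` and the image path are assumptions inferred from the model name, not confirmed by this card.

```python
# Minimal inference sketch; repository id and image path are assumptions,
# adjust them to wherever the fine-tuned weights actually live.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo_id = "hkivancoral/hushem_1x_deit_small_adamax_001_fold3"  # assumed repo id
processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)

image = Image.open("example.jpg").convert("RGB")  # placeholder image path
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])
```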

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch reproducing them follows the list):

- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
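
As a rough guide, the listed values map onto `transformers.TrainingArguments` as sketched below. Only the values above come from this card; the output directory and evaluation strategy are assumptions, and the optimizer is left at the Trainer default (Adam with betas=(0.9,0.999) and epsilon=1e-08, as reported above).

```python
# Sketch of TrainingArguments matching the hyperparameters listed above.
# output_dir and evaluation_strategy are assumptions; everything else is
# taken directly from the hyperparameter list in this card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="hushem_1x_deit_small_adamax_001_fold3",  # assumed output path
    learning_rate=0.001,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",  # assumed from the per-epoch validation metrics
)
```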

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 6    | 1.4218          | 0.2558   |
| 1.7221        | 2.0   | 12   | 1.4061          | 0.3953   |
| 1.7221        | 3.0   | 18   | 1.4801          | 0.3256   |
| 1.2972        | 4.0   | 24   | 1.5453          | 0.3023   |
| 1.2115        | 5.0   | 30   | 1.2993          | 0.3953   |
| 1.2115        | 6.0   | 36   | 1.4486          | 0.3721   |
| 1.1196        | 7.0   | 42   | 1.4881          | 0.3721   |
| 1.1196        | 8.0   | 48   | 1.2031          | 0.4419   |
| 1.0394        | 9.0   | 54   | 1.1825          | 0.4651   |
| 0.9076        | 10.0  | 60   | 1.3831          | 0.3953   |
| 0.9076        | 11.0  | 66   | 1.5606          | 0.3953   |
| 0.8351        | 12.0  | 72   | 1.6879          | 0.3721   |
| 0.8351        | 13.0  | 78   | 1.5744          | 0.5581   |
| 0.7325        | 14.0  | 84   | 2.1220          | 0.5116   |
| 0.5767        | 15.0  | 90   | 2.2458          | 0.4884   |
| 0.5767        | 16.0  | 96   | 2.4745          | 0.3953   |
| 0.487         | 17.0  | 102  | 2.9255          | 0.3953   |
| 0.487         | 18.0  | 108  | 2.8169          | 0.4186   |
| 0.265         | 19.0  | 114  | 2.9600          | 0.4419   |
| 0.2739        | 20.0  | 120  | 3.0131          | 0.3953   |
| 0.2739        | 21.0  | 126  | 3.2413          | 0.4186   |
| 0.1684        | 22.0  | 132  | 4.9920          | 0.3953   |
| 0.1684        | 23.0  | 138  | 3.1514          | 0.5116   |
| 0.3265        | 24.0  | 144  | 4.1598          | 0.3953   |
| 0.2652        | 25.0  | 150  | 3.3248          | 0.4651   |
| 0.2652        | 26.0  | 156  | 3.1898          | 0.4884   |
| 0.1992        | 27.0  | 162  | 3.7937          | 0.3953   |
| 0.1992        | 28.0  | 168  | 3.9838          | 0.4884   |
| 0.1826        | 29.0  | 174  | 3.5764          | 0.3721   |
| 0.124         | 30.0  | 180  | 4.1231          | 0.4419   |
| 0.124         | 31.0  | 186  | 4.1455          | 0.4186   |
| 0.1353        | 32.0  | 192  | 3.9925          | 0.4186   |
| 0.1353        | 33.0  | 198  | 3.7016          | 0.5581   |
| 0.0743        | 34.0  | 204  | 3.7997          | 0.5349   |
| 0.0362        | 35.0  | 210  | 3.6073          | 0.4884   |
| 0.0362        | 36.0  | 216  | 3.6198          | 0.4651   |
| 0.0082        | 37.0  | 222  | 3.6509          | 0.4651   |
| 0.0082        | 38.0  | 228  | 3.7081          | 0.4651   |
| 0.003         | 39.0  | 234  | 3.7432          | 0.4651   |
| 0.002         | 40.0  | 240  | 3.7616          | 0.4651   |
| 0.002         | 41.0  | 246  | 3.7690          | 0.4651   |
| 0.0018        | 42.0  | 252  | 3.7699          | 0.4651   |
| 0.0018        | 43.0  | 258  | 3.7699          | 0.4651   |
| 0.0016        | 44.0  | 264  | 3.7699          | 0.4651   |
| 0.0017        | 45.0  | 270  | 3.7699          | 0.4651   |
| 0.0017        | 46.0  | 276  | 3.7699          | 0.4651   |
| 0.0017        | 47.0  | 282  | 3.7699          | 0.4651   |
| 0.0017        | 48.0  | 288  | 3.7699          | 0.4651   |
| 0.0018        | 49.0  | 294  | 3.7699          | 0.4651   |
| 0.0017        | 50.0  | 300  | 3.7699          | 0.4651   |

### Framework versions

- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1