DL2020

This repository serves as a collaboration platform during the DL2020 group assignment. The goal is to experiment with different regularization techniques and achieve good results on a publicly available dataset; Imagewoof was chosen for this purpose. The leaderboard entries at the time of writing are 76.61% (5 epochs), 86.27% (20 epochs) and 87.83% (80 epochs) for the 128px category.

Folder Structure

  • architectures
    Contains the models that were used to test different regularization techniques (-> see Models)
  • best
    Contains the best results that we have achieved with different regularization techniques
  • experiments
    Contains the experiments performed to achieve the results, in the form of ipynb files
  • misc
    Contains helpers for statistical analysis, plotting, etc.
  • results
    Contains the CSV files generated by the experiments

File Name Schema

All files are named after the following schema: x_m_r-p[_r-p]_i, where x is the number of epochs trained with this configuration, m is the underlying model that was tested, r is the regularization technique that was used, p is its parameter (separated from r by the - sign), and i is the iteration. If multiple regularizations were used in a run, they are listed in alphabetical order.

Example: The file 20_o_dropout-20_3.csv means that 20 epochs were trained on the optimized base model, the tested regularization was a dropout of 20%, and this is the 3rd iteration.
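The schema above can be parsed mechanically. The following helper is purely illustrative and not part of the repository; the function name and return structure are our own choices:

```python
import re

# Matches the schema x_m_r-p[_r-p]_i described above, e.g. "20_o_dropout-20_3".
FILENAME_RE = re.compile(
    r"^(?P<epochs>\d+)_(?P<model>[a-z])_(?P<regs>.+)_(?P<iteration>\d+)$"
)

def parse_result_name(stem):
    """Split a result-file stem (without the .csv extension) into its parts."""
    m = FILENAME_RE.match(stem)
    if m is None:
        raise ValueError(f"does not match schema: {stem}")
    # Each regularization is "name" or "name-parameter"; multiple ones
    # are joined by underscores in alphabetical order.
    regs = []
    for part in m.group("regs").split("_"):
        name, _, param = part.partition("-")
        regs.append((name, param or None))
    return {
        "epochs": int(m.group("epochs")),
        "model": m.group("model"),
        "regularizations": regs,
        "iteration": int(m.group("iteration")),
    }
```

For the example above, `parse_result_name("20_o_dropout-20_3")` yields 20 epochs, model `o`, one regularization `("dropout", "20")`, and iteration 3.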

Models

We used the following models to evaluate different regularization techniques:

  1. (B)ase Model - Basic XResNet50 implementation from fastai
  2. (O)ptimized Base Model - XResNet50 with tuned parameters and regularizations
  3. (L)eaderboard Contestant - State-of-the-art model by Ayasyrev from the imagenette leaderboard, used for comparison

Regularization Techniques

Optimizer and Early Stopping

Model testing with different optimizers and early stopping to avoid overfitting.

architectures

blurpool and savemodelcallback

Model testing blurpool in the unified code base, together with SaveModelCallback

dropout

epsilontest

Tests with the epsilon parameter

final_imagenette

Results with the final notebook and the imagenette dataset (for comparison)

final_results

Results with the FINAL.ipynb

item_transformation

Experiments with different transformation techniques, done before the unified code base

learning_rate

lr_estimate

Experiments with LR estimation from fastai

results

Folder containing results of the optimized base model, our first unified code base

savemodelcallback

Experiments with SaveModelCallback (restore weights when results are worse than the epoch before)
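The idea behind fastai's SaveModelCallback can be sketched in plain Python, independent of any framework. This is a minimal sketch of the restore-best-weights mechanism only, not fastai's implementation; `get_weights`/`set_weights`, `train_one_epoch` and `evaluate` are hypothetical stand-ins:

```python
import copy

def train_with_best_restore(model, train_one_epoch, evaluate, epochs):
    """Track the best weights seen so far and restore them at the end.

    `model` is assumed to expose get_weights()/set_weights() (hypothetical
    names); `evaluate` returns a score where higher is better.
    """
    best_score = float("-inf")
    best_weights = None
    for _ in range(epochs):
        train_one_epoch(model)
        score = evaluate(model)
        if score > best_score:
            best_score = score
            best_weights = copy.deepcopy(model.get_weights())
    if best_weights is not None:
        # Roll back in case the later epochs performed worse.
        model.set_weights(best_weights)
    return best_score
```

In fastai itself the same effect is achieved by passing the callback to the learner, e.g. `SaveModelCallback(monitor='accuracy')`.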

tested_params

tests_before_unified_base_model

variance

Local Leaderboard

Open notebooks in Colab:
