This repo serves as a collaboration platform for the DL2020 group assignment. The goal is to experiment with different regularization techniques and achieve good results on a publicly available dataset; Imagewoof was chosen for this purpose. At the time of this writing, the leaderboard entries for the 128px category are 76.61% (5 epochs), 86.27% (20 epochs) and 87.83% (80 epochs).
- architectures: Contains the models that were used to test the different regularization techniques (-> see Models)
- best: Contains the best results that we achieved with the different regularization techniques
- experiments: Contains the experiments performed to obtain those results, as ipynb files
- misc: Contains helpers for statistical analysis, plotting, etc.
- results: Contains the CSV files generated by the experiments
All files are named after the following schema: `x_m_r-p[_r-p]_i`
where x is the number of epochs trained with this configuration, m the underlying model that was tested, r a regularization technique that was used, and p its parameter; i is the iteration. Technique and parameter are separated by a - sign. If multiple regularizations were used on a dataset, they are listed in alphabetical order.
Example: The file 20_o_dropout-20_3.csv
means that the optimized base model was trained for 20 epochs, the tested regularization was a dropout of 20%, and this is the 3rd iteration.
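For quick filtering of the result files, a small helper along these lines can unpack the schema. This parser is purely illustrative and not part of the repo:

```python
# Hypothetical helper (not in this repo) that unpacks the naming schema
# x_m_r-p[_r-p]_i described above, e.g. "20_o_dropout-20_3.csv".
def parse_result_name(fname):
    stem = fname.rsplit('.', 1)[0]
    parts = stem.split('_')
    epochs, model, iteration = int(parts[0]), parts[1], int(parts[-1])
    # Each middle part is "technique-parameter"; a technique may also
    # appear without a parameter, in which case the tuple has one element.
    regs = [tuple(p.split('-', 1)) for p in parts[2:-1]]
    return {'epochs': epochs, 'model': model,
            'regularizations': regs, 'iteration': iteration}

print(parse_result_name('20_o_dropout-20_3.csv'))
# {'epochs': 20, 'model': 'o', 'regularizations': [('dropout', '20')], 'iteration': 3}
```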
We used the following models to evaluate different regularization techniques:
- (B)ase Model - Basic XResNet50 implementation from fastai
- (O)ptimized Base Model - XResNet50 with tuned parameters and regularizations
- (L)eaderboard Contestant - State-of-the-art model by Ayasyrev from the Imagenette leaderboard, used for comparison
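As a hedged sketch, the (B)ase Model setup looks roughly like this with fastai v2. Batch size, image size and the number of epochs here are illustrative assumptions, not the exact values used in the experiments:

```python
from fastai.vision.all import *

# Minimal sketch of the base model: fastai's XResNet50 trained on
# Imagewoof at 128px. Hyperparameters are placeholders.
path = untar_data(URLs.IMAGEWOOF)
dls = ImageDataLoaders.from_folder(path, valid='val',
                                   item_tfms=Resize(128), bs=64)
learn = Learner(dls, xresnet50(n_out=dls.c), metrics=accuracy)
learn.fit_one_cycle(5)
```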
- Model testing to avoid overfitting
- Model testing of blurpool in the unified code base, together with SaveModelCallback
- Tests with the epsilon parameter
- Results with the final notebook and the Imagenette dataset (for comparison)
- Results with the FINAL.ipynb
- Experiments with different transformation techniques, done before the unified code base
- Experiments with LR estimation from fastai
- Folder containing the results of the optimized base model, our first unified code base
- Experiments with SaveModelCallback (restores the weights when the results are worse than those of the epoch before; see the sketch after this list)
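For reference, fastai's SaveModelCallback can be attached like this, reusing the `learn` object from the sketch above. Monitoring 'accuracy' and the checkpoint name are assumptions; the experiments may have monitored validation loss instead:

```python
from fastai.callback.tracker import SaveModelCallback

# Saves a checkpoint whenever the monitored metric improves and reloads
# the best one at the end of training, so a bad final epoch does not
# overwrite a better earlier result.
learn.fit_one_cycle(20, cbs=SaveModelCallback(monitor='accuracy', fname='best'))
```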
- The leaderboard of the best results is updated on each push via a GitHub Action. To see it, click the ✔️ symbol next to the last commit description, then "Details", then "Get list of best results" (a sketch of this aggregation follows below).
- For a graphical version, check out this notebook; to recalculate with the latest results, rerun it in Colab.
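A hedged sketch of how such a best-results list could be recomputed locally from the CSVs in results/. It assumes each file has an `accuracy` column stored as a fraction in [0, 1]; the actual GitHub Action may parse the files differently:

```python
import glob, os
import pandas as pd

# Collect the best accuracy per configuration (filename without the
# trailing iteration index) across all result files.
best = {}
for f in glob.glob('results/**/*.csv', recursive=True):
    acc = pd.read_csv(f)['accuracy'].max()          # assumed column name
    name = os.path.splitext(os.path.basename(f))[0]  # e.g. 20_o_dropout-20_3
    config = name.rsplit('_', 1)[0]                  # drop the iteration index
    best[config] = max(best.get(config, 0.0), acc)

for config, acc in sorted(best.items(), key=lambda kv: -kv[1]):
    print(f'{config}: {acc:.2%}')
```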
Open notebooks in Colab: