Project for a PyTorch implementation example of image classification.

Requirements:

- python >= 3.7
- pytorch >= 1.0
- pyyaml
- scikit-learn
- wandb
- pre-commit (for pre-commit formatting, type checking, and testing)
- hiddenlayer
- graphviz
- python wrapper for graphviz

Please run `poetry install` to install the necessary packages.
You can also set up the environment using docker and docker-compose.
Flowers Recognition Dataset: download the dataset from HERE.
```
.
├── docs/
├── LICENSE
├── README.md
├── dataset/
│   └── flowers/
├── pyproject.toml
├── .gitignore
├── .gitattributes
├── .pre-commit-config.yaml
├── poetry.lock
├── docker-compose.yaml
├── Dockerfile
├── tests/
└── src/
    ├── csv/
    ├── libs/
    ├── utils/
    ├── notebook/
    ├── result/
    ├── scripts/
    │   └── experiment.sh
    ├── train.py
    └── evaluate.py
```
- configuration class using `dataclasses.dataclass` (`libs/config.py`)
  - type checking
  - detection of unnecessary / extra parameters in a specified configuration
  - a `dataclass` instance is immutable, which prevents the settings from being changed by mistake
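As a minimal sketch of this idea (the field names and the `config_from_dict` helper here are illustrative, not the actual contents of `libs/config.py`), a frozen dataclass can both reject extra keys and block accidental mutation:

```python
import dataclasses


@dataclasses.dataclass(frozen=True)
class Config:
    # Illustrative fields only; see libs/config.py for the real ones.
    model: str = "resnet18"
    learning_rate: float = 0.001


def config_from_dict(d: dict) -> Config:
    """Build a Config, rejecting unnecessary / extra parameters."""
    known = {f.name for f in dataclasses.fields(Config)}
    extra = set(d) - known
    if extra:
        raise ValueError(f"unexpected config keys: {sorted(extra)}")
    return Config(**d)


config = config_from_dict({"model": "resnet50"})
# frozen=True makes the instance immutable: an assignment such as
# config.learning_rate = 0.1 raises dataclasses.FrozenInstanceError.
```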
- automatic generation of configuration files (`utils/make_configs.py`)
  - e.g. run this command:

    ```sh
    python utils/make_configs.py --model resnet18 resnet30 resnet50 --learning_rate 0.001 0.0001 --dataset_name flower
    ```

    Then you get every combination of `model` and `learning_rate` (6 config files in total), while the other parameters are set to the defaults described in `libs/config.py`. You can choose which data to use in an experiment by specifying `dataset_name`. The lists of data for training, validation, and testing are saved as csv files; you can see the paths to them in `libs/dataset_csv.py` and retrieve the ones corresponding to `dataset_name`. If you want to use another dataset, please add csv files and their paths to `DATASET_CSVS` in `libs/dataset_csv.py`.
  - You can also set tuple parameters in configs, as below:

    ```sh
    python utils/make_configs.py --model resnet18 --topk 1 3 --topk 1 3 5
    ```

    Running this gives you two configurations: one in which the `topk` parameter is (1, 3), and another in which it is (1, 3, 5).
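The combination logic above can be sketched with `itertools.product` (`make_param_grid` is a hypothetical helper for illustration, not the actual code of `utils/make_configs.py`):

```python
import itertools


def make_param_grid(**params):
    """Return one dict per combination of the given parameter value lists."""
    keys = list(params)
    return [dict(zip(keys, values)) for values in itertools.product(*params.values())]


grid = make_param_grid(
    model=["resnet18", "resnet30", "resnet50"],
    learning_rate=[0.001, 0.0001],
)
print(len(grid))  # 3 models x 2 learning rates = 6 combinations
```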
- running all the experiments with a shell script (`scripts/experiment.sh`)
- support for type annotations (`typing`)
- code formatting with `black`, `isort` and `flake8`
- model visualization for debugging using `hiddenlayer` (`src/utils/visualize_model.py`)
Please see `scripts/experiment.sh` for the details. You can set configurations and run all the experiments with the command below:

```sh
sh scripts/experiment.sh
```

If you use a local environment, run:

```sh
poetry install
```

If you use docker, run:

```sh
docker-compose up -d --build
docker-compose run mlserver bash
```

To train, evaluate, and visualize a model:

```sh
python train.py ./result/xxxx/config.yaml
python evaluate.py ./result/xxxx/config.yaml validation
python evaluate.py ./result/xxxx/config.yaml test
python utils/visualize_model.py MODEL_NAME
```
- black
- flake8
- isort
- pytorch implementation of image classification
- configuration class using `dataclasses.dataclass`
- automatic generation of config yaml files
- shell script to run all the experiments
- support for `typing` (type annotations)
- test code (run tests with the pre-commit check)
- `mypy` (pre-commit check)
- formatting (pre-commit `isort`, `black` and `flake8`)
- calculation of cyclomatic / expression / cognitive complexity (`flake8` extensions)
- CI for testing using GitHub Actions
- visualization of models
- add Dockerfile and docker-compose.yaml
This repository is released under the MIT License.