- The main file of interest here is torch_eva.py, where all of the evaluation is carried out.
- For ease of use, place your dataset and the trained model inside the PyTorch-Evaluation-Tool folder.
- If the above step is not possible for some reason, make sure to provide the appropriate paths to the dataset and saved model at the places indicated by the comments in the code.
- There are two places in the code where user input is needed: TEST_DATA_PATH and model. Those sections are clearly marked in the code comments; a sketch of both inputs is shown below.
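A minimal sketch of what those two inputs typically look like. The exact code lives in torch_eva.py; the paths, transforms, and ImageFolder layout below are assumptions for illustration:

```python
import torch
from torchvision import datasets, transforms

# The two user-supplied values; the paths here are placeholders.
TEST_DATA_PATH = "./test_data"      # folder containing the test set
MODEL_PATH = "./saved_model.pt"     # assumed filename for the saved model

# A typical image test set; adjust the transforms to match your training setup.
test_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
test_data = datasets.ImageFolder(TEST_DATA_PATH, transform=test_transform)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=32, shuffle=False)

# Load the model saved with torch.jit.save() (see the next item).
model = torch.jit.load(MODEL_PATH)
model.eval()
```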
- While training your model, make sure to save it using torch.jit.save(); that way you won't need the model architecture when evaluating. A minimal saving sketch follows.
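A minimal sketch of saving a trained model as TorchScript; the architecture below is only a stand-in:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in architecture

# ... training loop goes here ...

# Convert the trained model to TorchScript and save it; torch.jit.load()
# can then restore it without the original class definition being available.
scripted = torch.jit.script(model)
torch.jit.save(scripted, "saved_model.pt")
```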
- In case you have already saved your model using some other method (for example, a state_dict), space has been provided in the code for supplying your model architecture; see the sketch below.
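If you saved only a state_dict with torch.save(), re-supplying the architecture might look like this; MyNet and the filename are hypothetical and must match whatever you trained:

```python
import torch
import torch.nn as nn

# Hypothetical architecture: it must match the one used during training.
class MyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

    def forward(self, x):
        return self.net(x)

model = MyNet()
model.load_state_dict(torch.load("saved_model_state.pt"))
model.eval()
```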
- This tool doesn't generate a loss-vs-epoch or accuracy-vs-epoch graph, as those details are recorded while training the model and can't be retrieved from a saved model.
- If everything works correctly, you should get the following results (a sketch of how the printed metrics could be computed follows the list):
a) The test accuracy will be printed
b) The confusion matrix will be printed
c) A plot of the confusion matrix
d) Another plot of the confusion matrix as a heatmap
e) A plot of the ROC curve
f) The classification report will be printed
g) An interactive table of the classification report, generated as a Bokeh table
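For orientation, a rough sketch of how the printed metrics (accuracy, confusion matrix, classification report) could be computed with scikit-learn; the plots and the Bokeh table are produced by torch_eva.py itself and are omitted here:

```python
import torch
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report

# Collect predictions over the test set (model and test_loader as sketched above).
all_preds, all_labels = [], []
with torch.no_grad():
    for images, labels in test_loader:
        outputs = model(images)
        all_preds.extend(outputs.argmax(dim=1).tolist())
        all_labels.extend(labels.tolist())

print("Test accuracy:", accuracy_score(all_labels, all_preds))
print(confusion_matrix(all_labels, all_preds))
print(classification_report(all_labels, all_preds))
```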
- TensorBoard can also be run if you already have the saved logs.
- To run TensorBoard, use the following steps (a sketch of how such logs are written during training follows the steps):
a) Go to the folder containing the logs using the command prompt
b) Run the following command:
tensorboard --logdir logs
c) You will get a host address similar to the one below:
http://localhost:6006/
d) Paste it into your browser to view TensorBoard.
e) A sample log directory named logs is provided in the Project1_Final folder so users can run it and see how TensorBoard works
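The tool itself only reads existing logs; for completeness, a minimal sketch of how such logs might be written during training with torch.utils.tensorboard (the metric values are placeholders):

```python
from torch.utils.tensorboard import SummaryWriter

# Log one scalar per epoch into ./logs; afterwards `tensorboard --logdir logs`
# will pick the run up.
writer = SummaryWriter(log_dir="logs")
for epoch in range(10):
    writer.add_scalar("Loss/train", 1.0 / (epoch + 1), epoch)  # placeholder metric
writer.close()
```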
- This tool is intended for the evaluation of simple PyTorch models and may not be able to handle specialized evaluation methods, for which custom changes are needed.
- As a simple use case, Project1_Final already contains a dummy dataset and a dummy saved model so that users can see how the tool works before evaluating their own dataset and saved model.