I've always been interested in moving from black-box models toward interpretable ones. As a small step in this direction, I explored class activation maps:
- Explored the use of class activation maps (CAMs) to explain predictions of ConvNets
- Analyzed the advantages and limitations of three approaches: CAM, Gradient-Weighted CAM (GradCAM), and GradCAM++
- Used PyTorch implementations to see CAMs in action with a pretrained ResNet50 model on a subset of representative ImageNet classes
Here are some useful links:
- Read the article: How to Explain ConvNet Predictions Using Class Activation Maps