# Accurate parts visualization for explaining CNN reasoning via semantic segmentation

Published in The British Machine Vision Conference, 2020

Recommended citation: Harada, R., Tejero-de-Pablos, A., & Harada, T. (2020). Accurate Parts Visualization for Explaining CNN Reasoning via Semantic Segmentation. In BMVC.

Nowadays, neural networks are widely used for image classification, but the rationale behind their decisions is difficult to understand because of their “black-box” nature. Various visualization techniques have been proposed to provide additional information on the reasons for a classification result. Existing methods provide quantitative explanations by calculating heatmaps or interpretable components in the image. While the latter attach semantics to the image parts that contribute to the classification, the component areas are blurry due to the use of linear layers, which do not consider surrounding information. This makes it hard to point out the specific reason for the classification and to evaluate it quantitatively. In this paper, we introduce a novel method for explaining classification in neural networks, the Parts Detection Module. Unlike previous methods, ours is capable of determining the accurate position of the interpretable components in the image by performing upsampling and convolution stepwise, similarly to semantic segmentation. In addition to providing quantitative visual explanations, we also propose a method to verify the validity of the quantitative explanations themselves. The experimental results prove the effectiveness of our explanations.
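The stepwise upsampling-and-convolution idea mentioned above can be illustrated with a minimal NumPy sketch of a segmentation-style decoder: a low-resolution feature map is repeatedly doubled in resolution and convolved, and the final per-pixel argmax assigns each pixel to a part. This is only an illustrative sketch under assumed shapes and operations (nearest-neighbor upsampling, 3x3 "same" convolution, ReLU); the function names and architecture are hypothetical and not the authors' implementation.

```python
import numpy as np

def upsample2x(x):
    # Nearest-neighbor 2x upsampling of an (H, W, C) feature map.
    return x.repeat(2, axis=0).repeat(2, axis=1)

def conv3x3(x, w):
    # Naive 3x3 "same" convolution: x is (H, W, Cin), w is (3, 3, Cin, Cout).
    H, W, _ = x.shape
    cout = w.shape[3]
    padded = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros((H, W, cout))
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + 3, j:j + 3, :]
            out[i, j] = np.tensordot(patch, w, axes=([0, 1, 2], [0, 1, 2]))
    return out

def parts_decoder(feat, weights):
    # Alternate upsampling and convolution, as in a semantic-segmentation
    # decoder, then assign each pixel to the highest-scoring part channel.
    x = feat
    for w in weights:
        x = np.maximum(conv3x3(upsample2x(x), w), 0.0)  # upsample, conv, ReLU
    return x.argmax(axis=-1)  # (H, W) map of part indices

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 8, 16))          # low-resolution CNN features
weights = [rng.standard_normal((3, 3, 16, 16)) * 0.1,
           rng.standard_normal((3, 3, 16, 5)) * 0.1]  # 5 hypothetical parts
part_map = parts_decoder(feat, weights)          # shape (32, 32)
```

Because the spatial resolution is recovered gradually through convolutions that see neighboring pixels, the resulting part regions have sharper boundaries than those produced by a single linear layer, which scores each location independently.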

@inproceedings{harada2020accurate,
  title     = {Accurate Parts Visualization for Explaining CNN Reasoning via Semantic Segmentation},
  author    = {Harada, R. and Tejero-de-Pablos, A. and Harada, T.},
  booktitle = {The British Machine Vision Conference (BMVC)},
  year      = {2020}
}