TY  - INPR
U1  - Preprint
A1  - Padmanabhan, Deepan Chakravarthi
A1  - Plöger, Paul G.
A1  - Arriaga, Octavio
A1  - Valdenegro-Toro, Matias
T1  - Sanity Checks for Saliency Methods Explaining Object Detectors
N2  - Saliency methods are frequently used to explain Deep Neural Network-based models. Adebayo et al.'s work on evaluating saliency methods for classification models illustrates that certain explanation methods fail the model and data randomization tests. However, on extending the tests to various state-of-the-art object detectors, we illustrate that the ability to explain a model depends more on the model itself than on the explanation method. We perform sanity checks for object detection and define new qualitative criteria to evaluate the saliency explanations, both for object classification and bounding box decisions, using Guided Backpropagation, Integrated Gradients, and their SmoothGrad versions, together with Faster R-CNN, SSD, and EfficientDet-D0, trained on COCO. In addition, the sensitivity of the explanation method to model parameters and data labels varies class-wise, motivating per-class sanity checks. We find that EfficientDet-D0 is the most interpretable model independent of the saliency method, passing the sanity checks with few problems.
KW  - Computer Science - Learning
KW  - Computer Science - Computer Vision and Pattern Recognition
Y1  - 2023
AX  - 2306.02424v1
SP  - 18
S1  - 18
PB  - arXiv
ER  - 