Kavli Affiliate: Michael Brenner
| First 5 Authors: Arunachalam Narayanaswamy, Subhashini Venugopalan, Dale R. Webster, Lily Peng, Greg Corrado
| Summary:
Model explanation techniques play a critical role in understanding the source
of a model’s performance and making its decisions transparent. Here we
investigate whether explanation techniques can also serve as a mechanism for
scientific discovery. We make three contributions. First, we propose a
framework for converting the outputs of explanation techniques into a mechanism
for discovery. Second, we show how generative models, combined with black-box
predictors, can be used to generate hypotheses (without human priors) that can
be critically examined (a minimal sketch follows below). Third, using these
techniques we study classification models that predict Diabetic Macular Edema
(DME) from retinal images, for which recent work showed that a CNN trained on
these images is likely learning novel features. We demonstrate that the
proposed framework can explain the underlying scientific mechanism, thus
bridging the gap between the model’s performance and human understanding.
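The second contribution pairs a generative model with a black-box predictor to propose hypotheses. Below is a minimal sketch of one way such counterfactual hypotheses could be produced by searching a generator's latent space; the generator `G`, the DME classifier `f`, and the latent-optimization loop are illustrative assumptions, not the paper's actual models or procedure.

```python
# Minimal sketch: latent-space counterfactual search (assumptions: a pretrained
# generator G mapping latent z -> retinal image, and a black-box classifier f
# returning P(DME); both are hypothetical stand-ins, not the paper's models).
import torch

def counterfactual_hypothesis(G, f, z0, target=1.0, steps=200, lr=0.05):
    """Walk the generator's latent space so the classifier's DME prediction
    moves toward `target`; differences between the original and counterfactual
    images can then be inspected as candidate features for human review."""
    z = z0.detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        img = G(z)                    # synthesize an image from the latent code
        pred = f(img)                 # black-box DME probability
        loss = (pred - target) ** 2   # push the prediction toward the target class
        loss.mean().backward()
        opt.step()
    return G(z0).detach(), G(z).detach()  # original vs. counterfactual image
```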
| Search Query: ArXiv Query: search_query=au:"Michael Brenner"&id_list=&start=0&max_results=3