An Interpretable Neuron Embedding for Static Knowledge Distillation

Abstract

Although deep neural networks perform well on a wide range of tasks, their poor interpretability is frequently criticized. In this paper, we propose a new interpretable neural network method that embeds neurons into a semantic space to extract their intrinsic global semantics. In contrast to previous methods that probe latent knowledge inside the model, the proposed semantic vectors externalize the latent knowledge into static knowledge, which is easy to exploit. Specifically, we assume that neurons with similar activations carry similar semantic information. Semantic vectors are then optimized by continuously aligning activation similarity with semantic-vector similarity during the training of the neural network. Visualizing the semantic vectors allows for a qualitative explanation of the neural network, and we assess the static knowledge quantitatively through knowledge distillation tasks. Visualization experiments show that semantic vectors describe neuron activation semantics well. Without sample-by-sample guidance from the teacher model, static knowledge distillation achieves comparable or even superior performance to existing relation-based knowledge distillation methods.
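To make the alignment idea concrete, below is a minimal PyTorch-style sketch of how activation similarity and semantic-vector similarity could be matched during training. The function names, tensor shapes, use of cosine similarity, and the MSE objective are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F


def pairwise_cosine(x: torch.Tensor) -> torch.Tensor:
    """Cosine similarity between every pair of rows in x."""
    x = F.normalize(x, dim=-1)
    return x @ x.t()


def alignment_loss(activations: torch.Tensor,
                   semantic_vectors: torch.Tensor) -> torch.Tensor:
    """Encourage semantic-vector similarity to match activation similarity.

    activations:      (num_neurons, batch_size) responses of each neuron
                      to the current batch.
    semantic_vectors: (num_neurons, embed_dim) learnable embedding of
                      each neuron in the semantic space.
    """
    act_sim = pairwise_cosine(activations)       # neuron-neuron similarity from activations
    sem_sim = pairwise_cosine(semantic_vectors)  # neuron-neuron similarity from embeddings
    # Detaching the activation similarity is a choice made in this sketch so
    # that the alignment term only updates the semantic vectors.
    return F.mse_loss(sem_sim, act_sim.detach())


# Hypothetical usage: add the alignment term to the task loss during training,
#   loss = task_loss + lambda_align * alignment_loss(layer_acts.t(), sem_vecs)
# where layer_acts is a (batch_size, num_neurons) activation matrix.
```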

Publication
In KDD Workshop on Visualization in Data Science, VDS-KDD 2022
Wei Han
Ph.D. Candidate

My research interests include interpretable machine learning, transfer learning, continual/lifelong learning, few-shot learning, and adversarial learning.