Interpretable Low-Level Vision

Research on interpreting and explaining low-level vision networks

Discovering "Semantics" in Super-Resolution Networks

arXiv, 2021
Yihao Liu*, Anran Liu*, Jinjin Gu, Zhipeng Zhang, Wenhao Wu, Yu Qiao, Chao Dong

Can we find any “semantics” in SR networks? In this paper, by analyzing feature representations with dimensionality reduction and visualization, we discover deep semantic representations in SR networks, i.e., deep degradation representations (DDR), which relate to image degradation types and degrees.
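The analysis above can be illustrated with a minimal sketch: project deep-feature vectors into 2D and check whether they cluster by degradation. The features here are synthetic stand-ins (the real ones would be intermediate activations of a trained SR network), and the PCA projection is one simple choice of dimensionality reduction.

```python
import numpy as np

rng = np.random.default_rng(0)

def fake_features(mean, n=50, dim=64):
    # Placeholder for flattened deep-feature vectors of n images
    # (hypothetical stand-in for SR-network activations).
    return rng.normal(mean, 0.5, size=(n, dim))

feats_blur = fake_features(0.0)   # images with one degradation type (toy)
feats_noise = fake_features(2.0)  # images with another degradation type (toy)
X = np.vstack([feats_blur, feats_noise])

def pca_2d(X):
    # Project feature vectors onto the top-2 principal components.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T

Z = pca_2d(X)
# If the representation encodes degradation, the two groups separate in 2D.
print(abs(Z[:50, 0].mean() - Z[50:, 0].mean()) > 1.0)
```

In the paper's setting, a clear separation of such clusters by degradation type is what motivates calling the representation a deep degradation representation.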

Read More

Finding Discriminative Filters for Specific Degradations in Blind Super-Resolution

Neural Information Processing Systems (NeurIPS), Spotlight, 2021
Liangbin Xie*, Xintao Wang*, Zhongang Qi, Chao Dong, Ying Shan

Recent blind super-resolution methods typically consist of two branches, one for degradation prediction and the other for conditional restoration. Our experiments show that a one-branch network can achieve performance comparable to the two-branch scheme. How do one-branch networks automatically learn to distinguish degradations? We propose a new diagnostic tool, the Filter Attribution method based on Integral Gradient (FAIG), which aims at finding the most discriminative filters, rather than input pixels or features, for degradation removal in blind SR networks. Our findings not only help us better understand network behaviors inside one-branch blind SR networks, but also provide guidance for designing more efficient architectures and diagnosing blind SR networks.
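The core mechanism can be sketched as follows: attribute the change in a loss between a baseline model and a target model to individual weights by integrating gradients along the straight-line path between the two weight vectors. This toy uses a linear model and an assumed squared-error loss in place of a blind-SR network and its conv filters.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
w_true = np.array([3.0, 0.0, 0.0, 0.0])  # only "filter" 0 matters (toy setup)
y = X @ w_true

def grad_loss(w):
    # Gradient of the squared error ||Xw - y||^2 w.r.t. the weights.
    return 2 * X.T @ (X @ w - y)

w_base = np.zeros(4)      # baseline weights (toy stand-in)
w_target = w_true.copy()  # target weights (toy stand-in)

def faig_scores(w0, w1, steps=50):
    # Integrated-gradient attribution per weight along the path w0 -> w1.
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.mean([grad_loss(w0 + a * (w1 - w0)) for a in alphas], axis=0)
    return np.abs((w1 - w0) * grads)

scores = faig_scores(w_base, w_target)
print(int(np.argmax(scores)))  # weight 0 gets the highest attribution
```

In the paper, the analogous scores are computed per convolutional filter, and the top-scoring filters turn out to be the ones responsible for removing a specific degradation.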

Read More

Interpreting Super-Resolution Networks with Local Attribution Maps

Computer Vision and Pattern Recognition (CVPR), 2021
Jinjin Gu, Chao Dong

SR networks remain largely opaque, and few works attempt to understand them. In this work, we perform attribution analysis of SR networks, which aims at finding the input pixels that strongly influence the SR results. We propose a novel attribution approach called the local attribution map (LAM) to interpret SR networks. Our work opens new directions for designing SR networks and interpreting low-level vision deep models.
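A minimal sketch of the attribution recipe: take a scalar derived from an output patch and attribute it back to input pixels with path-integrated gradients. LAM additionally uses a blurred image as the path baseline; here the "network plus patch selector" is a synthetic linear stand-in and the baseline is a zero image, so only the general integrated-gradient machinery is illustrated.

```python
import numpy as np

rng = np.random.default_rng(2)
img = rng.normal(size=(8, 8))

# Toy "SR network + patch selector": responds only to a 2x2 region.
W = np.zeros((8, 8))
W[3:5, 3:5] = 1.0
def patch_score(x):
    return np.sum(W * x)

baseline = np.zeros_like(img)  # stand-in for LAM's blurred baseline image

def attribution_map(x, x0, steps=32, eps=1e-5):
    # Integrated gradients of patch_score along the path x0 -> x,
    # estimated numerically to mirror the general (nonlinear) recipe.
    alphas = (np.arange(steps) + 0.5) / steps
    diff = x - x0
    grads = np.zeros_like(x)
    for a in alphas:
        xt = x0 + a * diff
        for i in range(x.shape[0]):
            for j in range(x.shape[1]):
                e = np.zeros_like(x)
                e[i, j] = eps
                grads[i, j] += (patch_score(xt + e) - patch_score(xt - e)) / (2 * eps)
    return diff * grads / steps

attr = attribution_map(img, baseline)
# Only pixels inside the 2x2 region receive attribution in this toy.
print(np.count_nonzero(np.abs(attr) > 1e-6) == 4)
```

With a real SR network, such a map highlights which input pixels the network actually used to reconstruct a chosen output patch.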

Read More