RegExplainer: Generating Explanations for Graph Neural Networks in Regression Task

Published in Learning on Graphs Conference, 2023

Graph regression is a fundamental task that has gained significant attention in many graph learning applications. However, its inference process is often not easily interpretable. Existing explanation techniques focus on understanding GNN behaviors in classification tasks, leaving an explanation gap for graph regression models. In this work, we propose a novel method for explaining graph regression models (XAIG-R). Our method addresses the distribution-shifting problem and the continuously ordered decision boundaries that prevent existing methods from being applied to regression tasks. We introduce a novel objective based on information bottleneck theory together with a new mix-up framework, which supports various GNNs in a model-agnostic manner. Additionally, we present a contrastive learning strategy to handle the continuously ordered labels in regression tasks. We evaluate the proposed method on two benchmark datasets and a real-life dataset we introduce, and extensive experiments demonstrate its effectiveness in interpreting GNN models in regression tasks.
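To illustrate the idea of contrastive learning over continuously ordered labels, here is a minimal, hypothetical PyTorch sketch: rather than hard positive/negative pairs, each pair's target similarity decays with the distance between regression labels. The function name `soft_contrastive_loss`, the `bandwidth` parameter, and the specific weighting are illustrative assumptions, not the objective used in the paper.

```python
import torch
import torch.nn.functional as F

def soft_contrastive_loss(embeddings, labels, temperature=0.5, bandwidth=1.0):
    """Sketch only: embeddings (N, d) of explanation subgraphs, labels (N,) regression targets."""
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t() / temperature                      # pairwise cosine-similarity logits
    # Soft targets: pairs with closer regression labels should be embedded more similarly.
    label_dist = (labels.unsqueeze(0) - labels.unsqueeze(1)).abs()
    targets = torch.exp(-label_dist / bandwidth)       # in (0, 1], equals 1 for identical labels
    # Drop self-pairs from both logits and targets, then normalize targets per row.
    mask = ~torch.eye(len(labels), dtype=torch.bool)
    targets = targets[mask].view(len(labels), -1)
    targets = targets / targets.sum(dim=1, keepdim=True)
    log_prob = F.log_softmax(sim[mask].view(len(labels), -1), dim=1)
    return -(targets * log_prob).sum(dim=1).mean()

# Usage (assumed encoder): z = encoder(explanation_subgraphs); loss = soft_contrastive_loss(z, y)
```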

Download paper here

Recommended citation: Jiaxing Zhang, Zhuomin Chen, Hao Mei, Dongsheng Luo, Hua Wei. 2023. RegExplainer: Generating Explanations for Graph Neural Networks in Regression Task. Preprint at arXiv. https://arxiv.org/abs/2307.07840