Explainable AI

Group Introduction

The Explainable AI group focuses on explainable artificial intelligence research in visualization. Using visualization techniques and interaction design, the group presents and analyzes the structural characteristics, computational logic, decision paths, and outputs of complex AI models. Its core goal is to transform abstract algorithmic mechanisms, data features, and reasoning processes into visual forms that are easy to perceive, understand, and trace, helping researchers, developers, and users grasp how models work, where their strengths and limitations lie, and what biases they may carry. This in turn improves human understanding of, confidence in, and trust toward AI systems, providing essential support for the safe deployment, regulatory compliance, and continuous improvement of artificial intelligence.

Publications
  • InterpretStack: Interpretable Exploration and Interactive Visualization Construction of Stacking Algorithm.
    Yu Wang, Jing Lu, Le Liu, Junping Zhang, Siming Chen*.
    VLDB Workshops 2024.
  • GraphInterpreter: a visual analytics approach for dynamic networks evolution exploration via topic models.
    Lijing Lin, Jiacheng Yu, Fan Hong, Chufan Lai, Siming Chen, Xiaoru Yuan.
    Journal of Visualization, 2024.
  • Interpreting High-Dimensional Projections With Capacity.
    Yang Zhang, Jisheng Liu, Chufan Lai, Yuan Zhou, Siming Chen*.
    IEEE Transactions on Visualization and Computer Graphics (TVCG), Accepted, 2023.
  • Visual Explanation for Open-domain Question Answering with BERT.
    Zekai Shao, Shuran Sun, Yuheng Zhao, Siyuan Wang, Zhongyu Wei, Tao Gui, Cagatay Turkay, Siming Chen*.
    IEEE Transactions on Visualization and Computer Graphics (TVCG), Accepted, 2023.
    Paper: PDF (14.0 MB) | Video: MP4 (25.3 MB)