Special Session
Special Session on Trustworthy and Explainable AI: Interdisciplinary Advances in Multimodal Data Analysis

Co-Chairs:  

Ángel Miguel García Vico, Department of Informatics, University of Jaen, Spain
Muhammad Afzal, Department of Computer Science, Birmingham City University, United Kingdom


Scope of the Special Session:

The integration of Artificial Intelligence (AI) is revolutionizing how image data, often in conjunction with associated text, is processed, analyzed, and leveraged for critical decision-making. From enhancing diagnostic accuracy in medical imaging to enabling precision agriculture and ensuring quality control in manufacturing, AI-driven image analysis has become a cornerstone of modern innovation. However, the adoption of these powerful tools in high-stakes domains hinges on our ability to trust their outputs and understand their reasoning.

This special session is dedicated to the theoretical and practical advances in building trustworthy and explainable AI systems for real-world image analysis, with an additional interest in multimodal approaches that integrate textual analysis. We invite submissions on machine learning, deep learning, and computer vision that address the entire intelligent processing pipeline—from data acquisition and annotation to model training, validation, deployment, and post-hoc interpretation. We place a special emphasis on novel approaches that demonstrate robustness, fairness, transparency, and effectiveness, particularly in scenarios requiring high-resolution image analysis.

This session offers a unique platform for researchers and practitioners from diverse fields to present cutting-edge research, share deployment strategies, and discuss current challenges. We aim to foster a dialogue that pushes the boundaries of what intelligent systems can achieve with visual and related textual data, paving the way for more reliable and human-centric AI in healthcare, agriculture, manufacturing, and beyond.

Topics of interest include, but are not limited to:

  • Explainable AI (XAI) and interpretability methods for image and multimodal analysis
  • Multimodal AI: Fusing image, text, and other data modalities
  • Model and data uncertainty quantification
  • Robustness to adversarial attacks and distribution shifts
  • Fairness, bias, and equity in computer vision and language models