XAIDATA

Spring School on Explainability of Data Intensive AI Systems at the ETIS laboratory, CY University/ENSEA/CNRS

📍 SHS Auditorium and room MZ03, Rue des Chênes Pourpres, 95000 Cergy
📅 28-29/05/2026
Apply here! (participation is free but registration is required; lunch and coffee breaks are offered)

Application process

  1. Please fill in the form at the application link below.
  2. Applications are free.
  3. Coffee breaks and lunches are offered and organised at the school to promote networking.
  4. Participation is limited to 25 participants.

Applications link: Application Form

School Program (Provisional)

28/5/2026 (Day 1)

Time | Title | Speakers/Notes | Room
9:00-9:15 | School Opening | Organisers | Auditorium
9:15-10:00 | Explainability Basics | V. Christophides, E. Pitoura | Auditorium
10:00-11:00 | Counterfactuals for Fairness and Explainability | Dimitris Sacharidis | Auditorium
11:00-11:30 | COFFEE BREAK | | TBA
11:30-12:30 | Causal Feature Selection for Time Series Forecasting | Etienne Vareille | Auditorium
12:30-13:45 | LUNCH BREAK | | TBA
14:00-15:00 | Explainability for Time-to-Event Predictions | Apostolos Giannoulidis | Auditorium
15:00-15:20 | COFFEE BREAK | | TBA
15:20-17:00 | Hands-on Workshop: Experimenting with different explainability methods for Predictive Maintenance | Apostolos Giannoulidis, Etienne Vareille | MZ130

29/5/2026 (Day 2)

Time | Title | Speakers/Notes | Room
09:00-10:00 | Explainability for Retrieval Augmented Generation | E. Pitoura | Auditorium
10:00-11:00 | Explainability for Graph Tasks | Guillaume Renton | Auditorium
11:00-11:30 | COFFEE BREAK | | TBA
11:30-12:30 | Explaining Queries on Inconsistent Databases | Badran Raddaoui, Katerina Tzompanaki, Yurun Gu | Auditorium
12:30-13:45 | LUNCH BREAK | | TBA
14:00-15:00 | Hands-on Workshop: Evaluating Explainability through user studies | Luis Galárraga | Auditorium
15:00-15:20 | COFFEE BREAK | | TBA
15:20-17:00 | Hands-on Workshop: Evaluating Explainability through user studies | Luis Galárraga | MZ130

Motivation and Objectives

AI systems increasingly influence decisions in science, policy, and daily life. Because all model decisions are rooted in training data, understanding the data and its relationship to trained models is essential for building safe, reliable, and compliant AI systems across diverse applications. To this end, three complementary families of interpretability methods have been proposed to shed light on data-intensive automated decision systems: (a) Explainable AI, which focuses on feature attribution to understand which input features drive model decisions; (b) Data-Centric AI, which emphasizes data attribution to analyze how training examples shape model behavior; and (c) Functional Interpretability, which examines component attribution to understand how internal model components contribute to outputs. Different interpretability methods are currently used at different stages of the pipelines required to build modern AI systems.

The ETIS Spring School on the Explainability of Data-Intensive AI Systems aims to bring together researchers and students from data management, artificial intelligence, and responsible computing to explore how transparency and interpretability can be effectively integrated into data-driven environments. The school will investigate how explainability can provide actionable insights in different learning settings such as recommendation rankings, time-to-event predictions, graph-based classification and regression, Retrieval Augmented Generation (RAG) pipelines, causal feature selection, and queries over inconsistent data.
With this school we aim to:
  • exchange and discuss recent advances in explainability for data intensive AI systems,
  • provide a training and mentoring environment for master students, doctoral and postdoctoral researchers, and early career researchers,
  • identify methodological challenges and opportunities for joint research on graph learning, recommendations, causal explanation methods and RAG pipelines,
  • establish future collaborations, including publications, proposals, and student mobility initiatives.
A participation certificate will be provided on request at the end of the school.

Speakers

Vassilis Christophides

Vassilis Christophides, Professor, ENSEA, ETIS laboratory

Prof. Vassilis Christophides received his diploma in Electrical Engineering from the National Technical University of Athens (NTUA) in 1988, his DEA in computer science from the University Paris VI in 1992, and his Ph.D. from the Conservatoire National des Arts et Métiers (CNAM), Paris, in 1996. In September 2020, he joined the École Nationale Supérieure de l’Électronique et de ses Applications (ENSEA), Cergy, as Full Professor. Previously, he served the Computer Science Department of the University of Crete for 16 years. His main research interests span Machine Learning Systems, Data Science and Big Data Computing, Databases and Web Information Systems, as well as Digital Libraries and Scientific Systems. On these topics, he has published over 170 articles in top-tier journals and conferences. His research work has received more than 8600 citations, with an h-index of 50 according to Google Scholar. He received the 2004 SIGMOD Test of Time Award and several best paper awards (BDA 2021; ISWC 2003, 2007, 2009). He has chaired (General Chair of EDBT/ICDT 2014; Area or Track Chair at KDD 2024 & 2025, ICDE 2016, SCC 2004, EDBT 2004) or served on the program committees of numerous conferences (SIGMOD, VLDB, ICDE, EDBT, WWW, KDD, CIKM, etc.), and has acted as a reviewer for several journals (CACM, TODS, TOIS, TOIT, VLDB Journal, TKDE, DPS, etc.). He has also been a keynote or invited speaker at conferences and summer schools (PODS 2003, HDMS 2004, ESWC Summer School 2013, WebST 2016, BDA Summer School 2018, GDR RO/IA Summer School 2023, ForgtAI 2026).
Evaggelia Pitoura

Evaggelia Pitoura, Professor, University of Ioannina & Archimedes Research Unit, Athena RC, Greece

Evaggelia Pitoura is a Professor at the Department of Computer Science and Engineering of the University of Ioannina and a Lead Researcher at the Archimedes Research Unit, Athena RC, Greece. She holds a BEng degree from the University of Patras, Greece, and an MS and PhD from Purdue University, USA. Her current research interests focus on two primary areas: responsible data management, with an emphasis on fairness, explainability, and their interplay; and graph exploration and analysis. For her work, she has received best paper awards, a Marie Curie Fellowship, and two Recognition of Service Awards from ACM. She is an ACM Senior Member, founding chair of the Hellenic ACM SIGMOD chapter, and a member of the sectorial scientific council of Greece's National Council for Research, Technology and Innovation.
Luis Galárraga

Luis Galárraga, Permanent Researcher, Inria Rennes

Luis Galárraga is a full-time researcher at the IRISA/Inria Rennes research center. His research lies at the crossroads of three axes: pattern mining, knowledge management, and eXplainable AI. His work on eXplainable AI focuses on both the functional and human dimensions of explanations for black-box models trained on different data modalities, including tabular data, time series, knowledge graphs, and textual corpora. The ultimate goal of his research is to ensure that AI systems can deliver explanations that are not only faithful, but also trustworthy and fully understandable to their human recipients. To this end, his work also comprises the deployment of user studies that assess the impact of explainable AI systems on key cognitive aspects such as understanding of, or confidence in, AI. He is one of the founders and a recurrent organizer of the AIMLAI workshop on Advances in Interpretable Machine Learning and AI, held since 2019.
Yue Ma

Yue Ma, Associate Professor, Université Paris-Saclay

Yue Ma is an Associate Professor at Université Paris-Saclay and the LISN laboratory in France. Her research interests include inconsistency handling and measuring for knowledge bases, the semantic web, ontology modularization, and description-logic-based ontology construction. She has published in top-tier conferences and journals (AAMAS, KR, ECAI, ISWC, ESWC, JELIA, K-CAP, etc.) and has served on the program committees of major international conferences and journals (AAAI, IJCAI, IJAR, ECAI, ISWC). She co-organised the first and second International Workshop on Hybrid Question Answering with Structured and Unstructured Knowledge, co-located with WWW 2018 and K-CAP 2019.
Badran Raddaoui

Badran Raddaoui, Associate Professor, Télécom SudParis, Samovar Laboratory

Badran Raddaoui is an Associate Professor in the Computer Science Department at Télécom SudParis and the Polytechnic Institute of Paris. His research focuses on knowledge representation, reasoning under inconsistency, non-monotonic reasoning, and argumentation theory. He also explores the integration of symbolic AI methods with data and graph mining, as well as their application to the explainability of machine learning models. His contributions have been published in leading AI venues (IJCAI, AAMAS, KR, CP, etc.). More recently, his work has extended to the development of logic-based approaches for enhancing the reasoning capabilities of large language models. He regularly serves on the program committees of several top-tier AI conferences (e.g., IJCAI, AAAI, KR, AAMAS).
Guillaume Renton

Guillaume Renton, Associate Professor, ENSEA, ETIS laboratory

Guillaume Renton is an Associate Professor at the École Nationale Supérieure de l’Électronique et de ses Applications (ENSEA) in Cergy. His main research interests include Graph Learning and Graph Neural Networks, and their applications to Knowledge Graphs and Molecular Graphs. He is also interested in factual and counterfactual explainability for graph classification and regression, as well as for graph generation, and from an AI-for-Science perspective.
Dimitris Sacharidis

Dimitris Sacharidis, Assistant Professor, Université Libre de Bruxelles, Belgium

Dimitris Sacharidis is an assistant professor at the Data Science and Engineering Lab of the Université Libre de Bruxelles, Belgium. He is also a member of the AI for the Common Good Institute (FARI) in Brussels, leading the trust flagship project. Prior to that, he was an assistant professor at the Technical University of Vienna, and a Marie Skłodowska-Curie fellow at the "Athena" Research Center and at the Hong Kong University of Science and Technology. He completed his PhD and undergraduate studies in Computer Engineering at the National Technical University of Athens, and in between obtained an MSc in Computer Science from the University of Southern California. His research interests revolve around responsible AI, focusing on topics such as explainability, algorithmic fairness, model safety, and trust.
Katerina Tzompanaki

Katerina Tzompanaki, Associate Professor, Cergy Paris University, ETIS laboratory

Katerina Tzompanaki is an Associate Professor at CY Cergy Paris University and the ETIS laboratory in France. Previously, she was a visiting researcher at Télécom SudParis, Palaiseau, a post-doctoral researcher at Télécom ParisTech, and a PhD candidate at Université Paris-Saclay. She obtained her Electrical and Computer Engineering diploma from the National Technical University of Athens. Her research focuses on the explainability of data processes and machine learning algorithms. Her work has been published in top-tier international conferences such as VLDB, ICDE, CIKM, EDBT, PAKDD, and ISWC, among others. She co-organised the "Forging Trust In Artificial Intelligence" workshop in 2024 and 2025, co-located with the International Joint Conference on Neural Networks (IJCNN). Moreover, she served as Publicity co-chair of the International Conference on Web Engineering 2024 (ICWE24) and currently serves as Publications co-chair of the International Conference on Information Technology for Social Good (GoodIT26). Finally, she regularly serves on the program committees of top-tier conferences such as IJCAI, VLDB, SIGMOD, AAAI, EDBT, and CIKM.
Apostolos Giannoulidis

Apostolos Giannoulidis, Post-Doc, Cergy Paris University, ETIS laboratory

Since October 2025, Apostolos Giannoulidis has been a postdoctoral researcher at CY Cergy Paris University, and a member of the DATA & AI group at the ETIS laboratory. He defended his doctoral thesis, entitled “Non-supervised failure prediction in dynamic environments,” in August 2025. His PhD research was conducted at the Aristotle University of Thessaloniki, under the supervision of Professor Anastasios Gounaris, and focused on time-series anomaly detection and predictive maintenance in dynamic environments. Before joining CY Cergy Paris University, he was a research assistant at the Data Lab of the Aristotle University of Thessaloniki, collaborating with Atlantis Engineering and Istognosis Ltd on industrial and fleet predictive maintenance projects. He received his Bachelor’s degree in Informatics from the Aristotle University of Thessaloniki in 2020, graduating among the top 5% of his class.

Organisers

  • Vassilis Christophides (ETIS, CNRS, ENSEA, CYU, France)
  • Evi Pitoura (University of Ioannina, Greece)
  • Dimitris Kotzinos (ETIS, CNRS, ENSEA, CYU, France)
  • Katerina Tzompanaki (ETIS, CNRS, ENSEA, CYU, France)

Sponsors

  • PANDORA project
  • AIDA project
  • EXPIDA project
  • ANR
  • CY Advanced Studies

Contact

For general enquiries, program questions, or travel information, contact the organisers.