• Last updated: Wed, May 28, 2025 · Status: Ongoing
  • Atiye Sadat Hashemi, Yana Litinska, Jonas Björk, Dominik Dietler, Mattias Ohlsson, and Amir Aminifar

Explainable AI for Disease Outbreak Surveillance: Enhancing Transparency in Anomaly Detection Models

As AI has progressed from symbolic and statistical methods to deep learning, models with many layers and parameters have increasingly become “black boxes,” underscoring the growing importance of Explainable AI (xAI). xAI refers to methods that clarify the decision-making processes of AI systems and keep humans actively in the loop. This project explores how xAI can enhance transparency and trust in anomaly detection models used for disease outbreak surveillance. We focus on integrating xAI techniques, such as SHAP values and interpretable neural network architectures, into time series anomaly detection frameworks tailored to public health data. By combining domain expertise with data-driven modeling, we aim to develop systems that identify abnormal patterns in real time and explain them clearly.
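As a minimal sketch of the kind of explanation the project targets (not its actual pipeline), the snippet below uses SHAP's model-agnostic KernelExplainer to attribute the anomaly score of a scikit-learn IsolationForest, standing in for a surveillance anomaly detector; the feature names and data are purely hypothetical.

```python
# Illustrative only: attribute an anomaly score to individual surveillance
# features with SHAP. Feature names and data are invented for this sketch.
import numpy as np
import shap
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical weekly public-health indicators (rows = weeks).
feature_names = ["case_count", "test_positivity", "er_visits", "absenteeism"]
X = rng.normal(size=(200, 4))
X[-1] = [6.0, 4.5, 5.0, 3.0]  # inject one clearly abnormal week

model = IsolationForest(random_state=0).fit(X)

# Explain the continuous anomaly score (lower = more anomalous),
# using a small background sample to keep KernelExplainer tractable.
explainer = shap.KernelExplainer(model.decision_function, shap.sample(X, 50))
shap_values = explainer.shap_values(X[-1:])

# Per-feature contributions to the abnormal week's score: negative values
# indicate features pushing the week toward "anomalous".
for name, val in zip(feature_names, shap_values[0]):
    print(f"{name}: {val:+.3f}")
```

In a surveillance setting, attributions like these would let an epidemiologist see at a glance which indicator (e.g., a spike in test positivity) drove an alert, rather than trusting an opaque score.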
