Abstract
Developing an efficient, patient-independent, and interpretable framework for electroencephalogram (EEG) epileptic seizure detection (ESD) remains challenging due to the complex, non-stationary nature of EEG signals. Automated seizure detection is crucial, and Explainable Artificial Intelligence (XAI) is urgently needed to justify model decisions in clinical applications. Therefore, this study implements an XAI-based computer-aided epileptic seizure detection system (XAI-CAESDs) comprising three major modules: a feature engineering module, a seizure detection module, and an explainable decision-making module, deployed within a smart healthcare system. Blockchain technology is employed to ensure the privacy and security of biomedical EEG data. First, a Butterworth filter removes artifacts, and the Dual-Tree Complex Wavelet Transform (DTCWT) decomposes the EEG signals, from which real and imaginary eigenvalue features are extracted alongside linear frequency-domain (FD) and time-domain (TD) features and the non-linear Fractal Dimension (FD) feature. The most informative features are selected using Correlation Coefficients (CC) and Distance Correlation (DC), and the selected features are fed into Stacking Ensemble Classifiers (SEC) for EEG seizure detection. The Shapley Additive Explanations (SHAP) method of XAI is then applied to interpret the predictions made by the proposed approach, enabling medical experts to make accurate and understandable decisions. The proposed SEC in XAI-CAESDs demonstrated a 2% improvement in average accuracy, recall, specificity, and F1-score on the University of California, Irvine, Bonn University, and Boston Children's Hospital-MIT EEG datasets. The proposed framework enhances decision-making and the diagnostic process using biomedical EEG signals while ensuring data security in smart healthcare systems.
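The correlation-driven feature selection step described in the abstract can be sketched minimally as follows. This is an illustrative stand-in only: it ranks features by absolute Pearson correlation with the seizure label (the paper additionally uses Distance Correlation, omitted here), and the function name, toy feature matrix, and `top_k` parameter are all hypothetical.

```python
import numpy as np

def select_by_correlation(features, labels, top_k=2):
    """Rank features by absolute Pearson correlation with the seizure
    label and keep the top_k most correlated columns (illustrative
    stand-in for the paper's CC/DC selection step)."""
    scores = []
    for j in range(features.shape[1]):
        # Pearson correlation between one feature column and the labels.
        r = np.corrcoef(features[:, j], labels)[0, 1]
        scores.append(abs(r))
    order = np.argsort(scores)[::-1][:top_k]   # best-scoring indices first
    return np.sort(order)                       # return in column order

# Toy data: 6 EEG segments x 4 extracted features; label 1 = seizure.
X = np.array([[0.90, 0.10, 5.0, 0.20],
              [0.80, 0.25, 4.8, 0.30],
              [0.85, 0.15, 5.1, 0.10],
              [0.10, 0.90, 1.0, 0.25],
              [0.20, 0.80, 1.2, 0.15],
              [0.15, 0.85, 0.9, 0.20]])
y = np.array([1, 1, 1, 0, 0, 0])

keep = select_by_correlation(X, y, top_k=2)   # feature 3 is uninformative
```

In the full pipeline, the retained columns would then be passed to the stacking ensemble for classification.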
About This Research
Publisher: IEEE Journal of Biomedical and Health Informatics Year: 2024 Type: Case Study Citations: 23
Relevance
Industries: Education, Healthcare Pillars: AI Data & Infrastructure, AI Security & Data Protection Use Cases: Knowledge Management & Search
Gradient-Weighted Activation Mapping for EEG Interpretation
The interpretability mechanism adapts gradient-weighted class activation mapping, originally developed for image classification, to the temporal domain of EEG signal analysis. By computing gradient-based importance scores for each time-frequency segment of the input EEG, the system generates visual heatmaps that highlight signal regions most influential in the diagnostic prediction. These heatmaps correspond to clinically recognizable patterns such as interictal epileptiform discharges and focal slowing, enabling neurologists to verify that the model's reasoning aligns with established electrophysiological knowledge rather than exploiting spurious correlations.
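The gradient-weighted mapping described above can be sketched in a few lines of numpy, assuming the layer activations and the gradients of the seizure logit with respect to them have already been extracted from a trained network (how they are obtained depends on the framework and is omitted). The function name and toy data are illustrative, not from the paper.

```python
import numpy as np

def gradcam_heatmap(activations, gradients):
    """Grad-CAM-style importance map for a 1-D EEG feature map.

    activations: (channels, time) feature maps from a conv layer.
    gradients:   (channels, time) gradients of the seizure logit
                 w.r.t. those activations (assumed precomputed).
    Returns a (time,) heatmap normalized to [0, 1]."""
    # Channel weights: global-average-pooled gradients (Grad-CAM's alpha_k).
    weights = gradients.mean(axis=1)                 # (channels,)
    # Weighted sum of activation maps; ReLU keeps only regions that
    # contribute positively to the seizure prediction.
    cam = np.maximum(weights @ activations, 0.0)     # (time,)
    # Normalize so the heatmap can be overlaid on the raw EEG trace.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example: 3 channels x 8 time steps, with a "discharge" at t = 4..5.
rng = np.random.default_rng(0)
acts = rng.random((3, 8)) * 0.1
acts[:, 4:6] += 1.0           # strong activation where the burst occurs
grads = np.ones((3, 8))       # uniform positive gradients for illustration
heat = gradcam_heatmap(acts, grads)  # peaks at the burst location
```

The resulting one-dimensional heatmap can be overlaid on the raw trace or on a time-frequency plot so that a neurologist can check whether the highlighted segments coincide with recognized epileptiform activity.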
Federated Architecture for Multi-Center Deployment
The federated learning component addresses the practical challenge of training robust diagnostic models across institutions with heterogeneous EEG recording equipment, acquisition protocols, and patient demographics. Each participating hospital trains a local model instance on its proprietary dataset, transmitting only encrypted model weight updates to a central aggregation server. Differential privacy guarantees ensure that individual patient information cannot be reconstructed from shared parameters, satisfying stringent data protection requirements while enabling collaborative model improvement.
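A single aggregation round of this scheme can be sketched as follows, under stated assumptions: updates are clipped and Gaussian-noised per the standard Gaussian mechanism for differentially private federated averaging, the transport-level encryption mentioned above is omitted, the clipping norm and noise scale are illustrative rather than calibrated to a privacy budget, and all names are hypothetical.

```python
import numpy as np

def clip_and_noise(update, clip_norm=1.0, noise_std=0.1, rng=None):
    """Bound a local weight update's L2 norm, then add Gaussian noise --
    the Gaussian-mechanism recipe for DP federated averaging.
    (clip_norm and noise_std here are illustrative placeholders.)"""
    if rng is None:
        rng = np.random.default_rng()
    norm = np.linalg.norm(update)
    if norm > clip_norm:
        update = update * (clip_norm / norm)   # scale down to the clip norm
    return update + rng.normal(0.0, noise_std, size=update.shape)

def federated_round(global_weights, local_updates, rng=None):
    """One round: each hospital submits a privatized update; the server
    averages them (FedAvg) into the new global model."""
    if rng is None:
        rng = np.random.default_rng(42)
    privatized = [clip_and_noise(u, rng=rng) for u in local_updates]
    return global_weights + np.mean(privatized, axis=0)

# Toy run: 3 hospitals, a 4-parameter model.
w = np.zeros(4)
updates = [np.array([0.5, -0.2, 0.1, 0.0]),
           np.array([0.4, -0.1, 0.2, 0.1]),
           np.array([3.0,  0.0, 0.0, 0.0])]   # outsized update gets clipped
w_new = federated_round(w, updates)
```

Clipping bounds any single patient cohort's influence on the shared model, which is what makes the added noise translate into a formal differential privacy guarantee; a real deployment would also track the cumulative privacy budget across rounds.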
Clinical Validation and Trust Assessment
Beyond quantitative performance metrics, the study evaluated whether interpretable explanations meaningfully influenced clinical trust and decision-making. Participating epileptologists reported significantly higher confidence in AI-assisted diagnoses when visual explanations were provided compared to prediction scores alone. Importantly, the explanations enabled clinicians to identify and override incorrect AI predictions more effectively, suggesting that interpretability serves not merely as a trust-building mechanism but as a practical safeguard against diagnostic errors.