A Systematic Literature Review on the Application of Explainable AI Approaches in the Assessment of Autism Spectrum Disorder
Abdulmalik Ahmad Lawan 1,2,3,†,*
, Hauwa Zakariyya Abdullahi 1,†
, Abdullahi Yunusa Abdullahi 1,†
, Sadiya Tahir 4,†
1 Department of Computer Science, Aliko Dangote University of Science and Technology, Wudil 713281, Nigeria
2 Department of Computer Science, Maryam Abacha American University of Nigeria, Kano, Nigeria
3 Department of Computer Science, Al-Istiqama University Sumaila, Sumaila, Nigeria
4 Department of Pediatrics, Murtala Muhammad Specialist Hospital, Kano 700251, Nigeria
† These authors contributed equally to this work.
* Correspondence: Abdulmalik Ahmad Lawan
Academic Editor: Raul Valverde
Special Issue: New Concepts and Advances in Neurotechnology
Received: January 16, 2025 | Accepted: August 06, 2025 | Published: August 14, 2025
OBM Neurobiology 2025, Volume 9, Issue 3, doi:10.21926/obm.neurobiol.2503298
Recommended citation: Lawan AA, Abdullahi HZ, Abdullahi AY, Tahir S. A Systematic Literature Review on the Application of Explainable AI Approaches in the Assessment of Autism Spectrum Disorder. OBM Neurobiology 2025; 9(3): 298; doi:10.21926/obm.neurobiol.2503298.
© 2025 by the authors. This is an open access article distributed under the conditions of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium or format, provided the original work is correctly cited.
Abstract
Autism Spectrum Disorder (ASD) is a neurodevelopmental disorder associated with significant heterogeneity in symptoms and comorbid conditions. Studies have shown the value of early and accurate diagnosis in mitigating the associated social and behavioral challenges. Recently, studies have demonstrated the potential of machine learning (ML) models for identifying ASD patterns in neuroimaging, genetic, and behavioral data. However, the limited interpretability of these models poses a barrier to their clinical adoption. Explainable Artificial Intelligence (XAI) offers a solution by allowing clinicians to understand how diagnostic decisions are made. Based on a systematic literature review, the present study provides a thematic synthesis of 34 relevant studies that could assist in the creation of proper clinical instruments, identifies key challenges, and highlights the need for future studies to propose clinician-centered explainability metrics.
Keywords
Autism; XAI; machine learning; diagnosis; artificial intelligence
1. Introduction
Autism Spectrum Disorder (ASD) is a lifelong neurodevelopmental condition diagnosed on the basis of deficits in socialization and communication and the presence of restrictive and repetitive patterns of behavior [1]. Diagnosing ASD presents unique challenges, given its heterogeneity across individuals and its comorbidity with other neurodevelopmental conditions [2,3]. Traditionally, ASD diagnosis relies on structured behavioral assessments using gold-standard tools such as the Autism Diagnostic Observation Schedule (ADOS) [4,5] and the Autism Diagnostic Interview-Revised (ADI-R) [6,7]. The subjectivity in scoring these tools, the labor-intensive procedures involved, and the need for trained professionals to administer them pose numerous challenges, especially in settings with limited resources [8,9,10]. Given these challenges, several studies have highlighted the efficacy of embedding machine learning (ML) models in ICT tools to improve diagnostic accuracy and accessibility [11,12,13,14,15,16]. Accordingly, ML models built on behavioral, genetic, and neuroimaging data have demonstrated promising diagnostic accuracy for ASD, typically using popular algorithms such as Support Vector Machines (SVM), Deep Neural Networks (DNN), and Random Forests (RF) [17,18,19]. However, despite this accuracy, the "black box" nature of these models prevents clinicians from understanding their decision-making processes and their alignment with the fundamental assumptions of ASD assessment [20,21,22,23,24,25,26]. This lack of transparency has been a significant barrier to the real-life adoption of ML-based ASD diagnostic tools.
Consequently, recent studies have adopted Explainable Artificial Intelligence (XAI) approaches such as Shapley Additive exPlanations (SHAP), Local Interpretable Model-agnostic Explanations (LIME), and Class Activation Mapping (CAM) to address the "black box" challenge and to achieve both accuracy and clinical relevance in ASD assessment [24,27,28,29,30,31].
However, despite the advances with XAI, the imprecision of current methods and their lack of alignment with diagnostic assumptions and practices specific to ASD present a critical gap in real-world applicability [23,24,32,33,34,35,36,37]. For instance, Joudar [38] conducted a systematic review of 46 studies on AI application trends in ASD diagnosis, triage, and prioritization, and highlighted the potential of emerging approaches, including XAI. Recently, Viswan [39] categorized the popular models and frameworks for applying XAI to Alzheimer's diagnosis and discussed their clinical relevance, limitations, and prospects. Similarly, based on a systematic review of 23 studies, Vimbi [40] highlighted the roles, limitations, and prospects of the LIME and SHAP frameworks in interpreting AI models for Alzheimer's disease detection. Kong [41] provided a broader perspective on medical image diagnosis, emphasizing the need for human-centered XAI. What remains lacking is a definitive account of how XAI approaches are applied in the assessment of ASD and of their relevance to real-life clinical settings.
The present systematic literature review examines studies on the application of XAI approaches in ML-based ASD assessments. The study provides a thematic synthesis of the literature based on the performance of the commonly utilized models and frameworks, focusing on their computational power and their alignment with clinical practices and fundamental diagnostic assumptions. Current challenges and opportunities for improvement are then explored. In essence, the study addresses the question: "How can XAI approaches be applied and evaluated in alignment with clinical diagnostic practices and assumptions for ASD?" The findings will provide valuable insights for researchers, clinicians, and relevant stakeholders on developing ML models that are accurate, interpretable, and feasible for real-world implementation in ASD assessments.
2. Methodology
2.1 Search Strategy
The present study adopted the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) [42] guidelines to ensure a structured approach to searching, selecting, and analyzing studies relevant to the research aim, following an approach commonly adopted in the literature on ASD screening and interventions [11,43,44]. The search was conducted in December 2024 across four major databases reputable in the fields of computing and neurodevelopmental disorders: the ACM Digital Library, PubMed, IEEE Xplore, and Scopus. The search keywords were chosen to cover the essential concepts of the research domain: ("Autism" OR "Autism Spectrum Disorder") AND "Explainable" formed the primary query, while "Interpretable", "Explainable AI", "Artificial Intelligence", and "Machine Learning" were appended with the "OR" logical operator in repeated searches. These keywords were matched against publication metadata (titles, abstracts, author keywords, and journal index terms) to maximize the retrieval of the most relevant studies. The searches yielded 21 records from the ACM Digital Library, 91 from PubMed, 32 from IEEE Xplore, and 103 from Scopus, a total of 247 records before duplicate removal.
2.2 Selection Criteria
Based on the PRISMA guidelines, the authors' predefined inclusion and exclusion criteria were employed to select the most relevant studies for thematic synthesis. The inclusion criteria targeted full-text journal articles, conference papers, and book chapters published in English that applied XAI approaches to ASD assessment and involved empirical analysis of behavioral, brain imaging, or genetic datasets. The exclusion criteria covered studies unrelated to XAI, studies not on ASD, and studies with inaccessible full texts; studies on any data modality that did not integrate XAI techniques were likewise excluded. The screening process began with the removal of 62 duplicate records, leaving 185 unique records. Thirty-six records were then eliminated: editorial materials (n = 21), records without full texts (n = 4), literature reviews (n = 8), and non-English records (n = 3). The authors downloaded and screened the remaining 149 full-text articles for eligibility and relevance, of which 106 were discarded for not focusing on XAI (n = 40), not focusing on ASD (n = 11), using non-ML approaches (n = 26), or being intervention studies (n = 29). Of the 43 articles remaining, nine were further removed owing to substantial ambiguity in their aims and methodology (n = 5) or misalignment with the present study's aim (n = 4). Finally, 34 studies met the inclusion criteria and were included in the final literature synthesis. A summary of the inclusion and exclusion criteria is provided in Table 1.
Table 1 Inclusion/Exclusion criteria.

2.3 Quality Assessment
A quality assessment strategy was applied to the included studies to ensure the quality and reliability of this systematic literature review. This strategy evaluated aspects such as methodological soundness, the applicability and relevance of the XAI approach to ML-based ASD assessment, the reliability of the data sources, and the robustness of the analysis involved. Two co-authors independently assessed each study for quality, and open discussions were used to resolve disagreements and maintain an unbiased, rigorous review process. Mendeley Desktop v1.19.3 reference management software was used to organize and track the literature sources and to ensure efficient annotation, collaboration, and data extraction.
2.4 Data Extraction
In the data extraction phase, essential information was systematically collected from the final set of thirty-four studies. Extracted details included the year of publication, number of citations, dataset sources, XAI approaches utilized, ML models employed, evaluation metrics, explainability parameters considered, and key findings. This structured data was synthesized to provide a comprehensive understanding of the application of XAI techniques in ASD assessment. The PRISMA flow diagram (Figure 1) presents an overview of the identification, screening, and selection process at each stage.
Figure 1 PRISMA chart of the study.
3. Results
3.1 Descriptive Analysis of XAI and ML Trends in ASD Assessments
The trend analysis of XAI and ML studies applied to ASD diagnosis shows that the area is in its infancy, with a notable increase in publications over the years, as demonstrated in Figure 2. Based on the systematic search conducted in the present study, relevant publications began with one article in 2020 [45] and six articles in 2021 [24,25,46,47,48,49], signaling initial interest. Although a few attempts, including the initial one, were not explicitly framed as XAI, they already reflected certain aspects of it [13,45,50]. For instance, Lanciano [45] and Lawan [13] conducted comparative evaluations of mathematically and empirically interpretable algorithms against ML models, focusing on accuracy and explainability. The publication trend then expanded gradually, with seven, nine, and eleven articles appearing in 2022 [31,51,52,53,54,55,56], 2023 [57,58,59,60,61,62,63,64,65], and 2024 [66,67,68,69,70,71,72,73,74,75,76], respectively. This consistent increase likely reflects growing recognition in the computing and healthcare research communities of the need for transparency in ML models for ASD, as identified in recent studies [24,26,27,31,36,46,54,60,77].
Figure 2 Publication across years.
Publications span multiple sources of different types: twenty-four journal articles from outlets such as IEEE Transactions on Medical Imaging [67,69], Computers, Materials and Continua [54,55], Diagnostics [70], Artificial Intelligence in Medicine [61], and others [46,48,49,51,52,56,57,59,60,63,64,71,72,73,74,75,76]; nine articles featured in conference proceedings [24,25,31,45,53,58,62,65,68]; and a single book chapter [47]. This spread underscores the interdisciplinary work needed to integrate ML techniques with the clinical requirements of ASD diagnostics. The distribution of articles across source types is depicted in Figure 3.
Figure 3 Publications source types.
3.2 Citation Patterns and Key Publishers
Figure 4 and Figure 5 present the distribution of publishers by total citation count and by number of articles, respectively. The most influential sources were published by Springer, Elsevier, ACM, and IEEE, with 6 articles [24,31,53,62,72,74] accumulating 205 citations, 7 articles [47,48,51,59,61,64,75] accumulating 153 citations, 2 articles accumulating 75 citations, and 6 articles [49,58,65,67,68,69] accumulating 53 citations, respectively. This demonstrates these publishers' dominance in disseminating applied research on XAI for ASD assessments.
Figure 4 Distribution of citations by publisher.
Figure 5 Distribution of articles by publisher.
Figure 6 shows that citation volume peaked in 2021 with 226 citations, aligning with the then-increased clinical interest in integrating AI tools that support interpretability and decision-making in ASD diagnostics. The two most frequently cited works appear in the Lecture Notes in Computer Science [24,31], followed by the British Journal of Psychiatry [52], Medical Image Analysis [48], IEEE Access [49], and Artificial Intelligence in Medicine [61], all of which cover both trends in AI and their applications in healthcare. This alignment of theoretical and applied research highlights the critical role of interdisciplinary studies in shaping explainable diagnostic tools for ASD.
Figure 6 Citation across years.
Citation counts by source type (Figure 7) indicated the popularity of journal articles, which attracted the most citations (n = 336), followed by conference papers (n = 266) and book chapters (n = 25). The difference may carry little significance, however, since we identified far more relevant journal articles (n = 24) than conference papers (n = 9) and book chapters (n = 1), as shown in Figure 3.
Figure 7 Citations across source types.
3.3 Algorithms Utilized in the Included Studies
The articles included in the present study applied various algorithms to achieve both diagnostic accuracy and interpretability. As shown in Table 2, SVM [24,25,31,47,49,53,56,60,67,70,72,74,75], DNN [31,47,51,52,53,57,58,60,61,68,71], and RF [60,62,63] are the most frequently employed algorithms, each used in multiple studies. CNNs were particularly effective for neuroimaging data, where their deep structure allows intricate patterns to be extracted from brain scans [46,52,69,78]. SVMs are favored for their robustness in high-dimensional settings where straightforward model interpretability alongside classification accuracy is in high demand [25,31,58,74,79]. Random Forest models, in turn, were employed for their ensemble-based interpretability, with tree-based structures providing greater transparency about how input features influence diagnostic outcomes [53,64,74,80,81,82,83,84]. As Table 2 shows, this mix of algorithms reflects researchers' preference for balancing model complexity against interpretability, which is essential for diagnostic tools in clinical environments.
Table 2 Extracted information table.

3.4 Datasets Utilized in the Studies
The datasets predominantly utilized across studies were the Autism Brain Imaging Data Exchange (ABIDE) [46,47,48,51,52,56,67,68,69] and the UCI ASD-Test dataset [31,53,54,55,60,70]. ABIDE provides a rich collection of sMRI and fMRI neuroimaging data from ASD and typically developing (TD) participants, essential for model training and validation [95,96]. The UCI ASD-Test dataset, by contrast, contains behavioral data based on responses to demographic questions and the Q-CHAT-10 and AQ-10 items for children, adolescents, and adults [86]. Other studies utilized behavioral data based on facial images [57,58], eye tracking [25,71], and recorded videos [64]. Genomic [63,72], medical, and metrological records [30,59,74,76] were equally utilized. Notably, some studies employed multimodal data to build comprehensive diagnostic models that capture the behavioral, genetic, medical, and neuroimaging complexities of ASD [59,61,63,66].
3.5 XAI Approaches and Evaluation Metrics Utilized in the Studies
The articles included in the present study have shown that SHAP [54,59,64,65,66,72,75], LIME [24,31,53,54,70,74], and Grad-CAM [46,57,58,62] were the most widely applied XAI approaches. Studies used SHAP to attribute individual predictions to specific features, giving a granular understanding of model decisions; this is particularly effective for checking the alignment of diagnostic predictions with clinical assumptions, such as linking neuroimaging predictors to ASD-related behaviors. LIME validates decision quality through a localized examination of model behavior, helping to ensure that specific predictions are clinically sound. Grad-CAM, meanwhile, proved effective with CNN-based models of brain imaging data, allowing clinicians to validate the regions of interest (ROI) associated with ASD in scans.
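The intuition behind SHAP's feature attribution can be illustrated without the shap library itself. The sketch below (purely illustrative, not from any reviewed study) computes exact Shapley values for a hypothetical three-feature "risk score" by enumerating every feature coalition and averaging each feature's marginal contribution; the attributions then sum to the difference between the model's output at the instance and at a baseline (the efficiency property that makes SHAP values interpretable as additive explanations).

```python
# Minimal sketch of the idea behind SHAP (not the shap library): exact
# Shapley values for a toy 3-feature model, computed by enumerating every
# feature coalition and averaging each feature's marginal contribution.
from itertools import combinations
from math import factorial

import numpy as np

def model(x):
    # Hypothetical "risk score": a simple nonlinear function of 3 features.
    return 2.0 * x[0] + x[1] * x[2]

def shapley_values(model, x, baseline):
    n = len(x)
    phi = np.zeros(n)
    features = range(n)

    def value(coalition):
        # Features in the coalition take their observed value; the rest are
        # replaced by the baseline (a crude stand-in for marginalizing them out).
        z = np.array([x[i] if i in coalition else baseline[i] for i in features])
        return model(z)

    for i in features:
        others = [j for j in features if j != i]
        for size in range(n):
            for S in combinations(others, size):
                # Shapley kernel weight |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += w * (value(set(S) | {i}) - value(set(S)))
    return phi

x = np.array([1.0, 2.0, 3.0])
baseline = np.zeros(3)
phi = shapley_values(model, x, baseline)
# Efficiency property: the attributions sum to f(x) - f(baseline).
print(phi, phi.sum(), model(x) - model(baseline))
```

Here the linear term is attributed entirely to feature 0, while the interaction term x1·x2 is split evenly between features 1 and 2; production tools approximate this exponential enumeration, since it is infeasible beyond a handful of features.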
In evaluating the effectiveness of the ML algorithms, standard metrics including accuracy, precision, recall, and the F1-score were employed in assessing the computational performance of the models. Particularly, Receiver Operating Characteristic (ROC) curves and Area Under the Curve (AUC) were frequently utilized to evaluate classifier performance across different threshold levels and offer a summarized metric of model accuracy across varying sensitivity-specificity trade-offs. SHAP, LIME values, and Grad-CAM heatmaps were reported alongside these computational performance metrics to provide both numeric and visual illustrations for feature-level contributions.
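As a minimal sketch, the standard metrics above can be computed with scikit-learn on a handful of hypothetical predictions (the labels and scores below are illustrative, not taken from any reviewed study):

```python
# Computing the evaluation metrics commonly reported in the reviewed studies:
# accuracy, precision, recall (sensitivity), F1, and threshold-free AUC.
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)

y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # toy labels: 1 = ASD, 0 = TD
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.55, 0.9, 0.7, 0.3])  # model probabilities
y_pred = (y_score >= 0.5).astype(int)          # binarize at a 0.5 threshold

acc = accuracy_score(y_true, y_pred)
prec = precision_score(y_true, y_pred)
rec = recall_score(y_true, y_pred)             # sensitivity
f1 = f1_score(y_true, y_pred)
auc = roc_auc_score(y_true, y_score)           # summary over all thresholds

print(f"accuracy={acc:.2f} precision={prec:.2f} recall={rec:.2f} "
      f"f1={f1:.2f} AUC={auc:.2f}")
```

Note that AUC is computed from the raw scores rather than the thresholded predictions, which is why it summarizes performance across the sensitivity-specificity trade-off that a single accuracy figure hides.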
Generally, the findings highlighted in the foregoing results indicate an increasing integration of XAI and ML techniques and emphasize their importance for ensuring clinical relevance in ML-based ASD diagnostics. The key motivation is to balance diagnostic accuracy with interpretability, which is necessary for the clinical validation of ML models. Hence, linking XAI metrics with established symptoms of ASD could be instrumental in making informed clinical decisions.
4. Discussion
4.1 Preferred ML Models for the XAI Approaches
Relevant studies on the application of ML to ASD assessment have explored various algorithms with promising results, aimed at improving clinical relevance alongside diagnostic accuracy. Many of these algorithms yield models with complex learning styles capable of capturing both linear and nonlinear patterns in behavioral, genetic, and neuroimaging ASD datasets. Previous studies have examined the strengths and weaknesses of notable ML models, including SVM, DNN, and RF. In particular, neuroimaging studies have utilized variants of SVM for their robustness and relevance to interpretable ML diagnosis of ASD from high-dimensional sMRI and fMRI data [24,25,31,47,49,53,56,60,67,70,72,74,75]. For instance, Eslami et al. [47] combined SVM with deep learning to classify ASD brain scans from neurotypical ones, consistently exhibited high accuracy, and claimed explainability for the technique's decisions. Similarly, Jung [67] derived explainable patterns in ABIDE fMRI from masked and other ROIs using multiple ML models, including SVM, and recorded the superior performance of a stacked auto-encoder without a mask on all metrics except specificity. Overall, studies have indicated the high predictive power of SVM models in identifying ASD patterns across datasets, despite their limited interpretability to non-computing professionals, which presents a particular challenge to clinical relevance and adoption.
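The interpretability gap described above can be sketched in a few lines: a Random Forest exposes per-feature importances natively, whereas an RBF-kernel SVM offers no comparable built-in attribution and therefore needs post-hoc XAI. The data and feature count below are synthetic stand-ins, not drawn from any reviewed study.

```python
# Illustrative comparison (synthetic data): SVM vs Random Forest accuracy,
# and the RF's built-in feature importances -- the kind of native
# transparency the reviewed studies cite as the appeal of tree ensembles.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for a behavioral screening dataset (e.g., 10 items).
X, y = make_classification(n_samples=400, n_features=10, n_informative=4,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

svm = SVC(kernel="rbf").fit(X_tr, y_tr)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

svm_acc = svm.score(X_te, y_te)
rf_acc = rf.score(X_te, y_te)

# The RF exposes normalized per-feature importances directly; the RBF SVM
# has no analogous attribute, hence the need for post-hoc XAI methods.
importances = rf.feature_importances_
top = np.argsort(importances)[::-1][:3]
print(f"SVM accuracy: {svm_acc:.2f}, RF accuracy: {rf_acc:.2f}")
print("Most important screening items (RF):", top.tolist())
```
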
Similarly, variants of DNN have been extensively investigated for ASD assessment owing to their ability to capture complex, nonlinear relationships within large datasets [31,47,51,52,53,57,58,60,61,68,71]. For instance, Mayor Torres [61] recovered the most relevant features from a pre-trained CNN model built on multimodal data comprising emotional features captured through facial images of individuals with ASD and their corresponding EEG recordings; the study applied multiple XAI approaches, including LRP, PatternNet, and Pattern-Attribution. Equally, Kang [58] analyzed facial emotion symptoms using a CNN and provided an explainable diagnostic framework for early ASD diagnosis based on Grad-CAM. Generally, DNN models, especially CNNs and RNNs, are effective at processing neuroimaging data and identifying complex patterns associated with ASD. However, the "black-box" nature of DNNs remains the key limitation to their direct clinical applicability [56,63]. Hybrid models with rule-based surrogates, or naturally interpretable architectures such as attention mechanisms, can provide enhanced clinical insight without sacrificing accuracy, mitigating DNN opacity. Thus, applying XAI approaches to interpret DNN predictions is crucial for clinical relevance.
Meanwhile, RF models, as ensembles of decision trees, have been explored for their inherent interpretability in terms of feature importance across diverse data types [60,62,63]. This is crucial for facilitating a better understanding of ASD-related biomarkers and helps align a model's output with clinical diagnostic assumptions [49]. Alternatively, studies have employed surrogate models such as linear regression [31,53,63,74,75,76] and decision trees [48,60,66,76] to approximate the behavior of more complex "black-box" models and give clinicians a closer look at their decision-making processes.
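The surrogate-model idea can be sketched as follows: a shallow decision tree is fit not to the true labels but to a black-box model's predictions, so its rules approximate the opaque decision logic; "fidelity" then measures how faithfully the surrogate mimics the black box. All data and models below are illustrative stand-ins, not from any reviewed study.

```python
# Sketch of a global surrogate: fit a shallow decision tree to a black-box
# model's *predictions* (not the true labels) so its rules give an
# approximate, inspectable view of the black box's decision logic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           random_state=1)

black_box = SVC(kernel="rbf").fit(X, y)   # stand-in for an opaque model
y_bb = black_box.predict(X)               # its decisions become the targets

surrogate = DecisionTreeClassifier(max_depth=3, random_state=1).fit(X, y_bb)

# Fidelity: how often the surrogate reproduces the black box's decisions.
fidelity = float((surrogate.predict(X) == y_bb).mean())
print(f"surrogate fidelity to black box: {fidelity:.2f}")
print(export_text(surrogate, feature_names=[f"feat_{i}" for i in range(6)]))
```

A clinician can read the printed rule list directly; the fidelity score signals how much of the black box's behavior that readable approximation actually captures.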
As shown in Table 3, SVM is the most commonly used model, while DNNs tend to yield higher accuracy at some cost to interpretability.

4.2 Commonly Utilized XAI Approaches
Incorporating explainability approaches in ML-based ASD diagnosis is essential to bridging the gap between computational performance and clinical applicability. The articles included in the present study employed a range of methods for interpreting the outputs of ML models, with the trade-offs shown in Table 4. Firstly, a significant number of studies applied SHAP [54,59,64,65,66,72,75] and LIME [24,31,53,54,70,74] to explain how specific features contributed to models' diagnoses of individual cases. For instance, Xu [66] utilized SHAP to interpret the decisions of multiple classifiers in investigating the relationship between physical fitness, gray matter volume, and ASD severity in children. Biswas [24] demonstrated the efficacy of LIME in explaining feature importance without sacrificing model performance. Similarly, Magboo [53] provided a LIME interpretation of the most essential features in ASD predictions made by multiple classifiers on behavioral data, noting the cost-effectiveness and clinical relevance of the approach. Recently, Jeon [70] utilized multiple XAI approaches, including SHAP and LIME, and emphasized the importance of rigorous data preprocessing for improving models' performance, generalizability, and real-world applicability across diverse clinical datasets.

The EyeXplain system [25], which has been used in clinical settings to analyze eye-tracking data for ASD diagnosis, helps pediatricians comprehend DNN outputs through visual saliency maps. This demonstrates the feasibility of integrating XAI into real diagnostic procedures.
4.3 Toward Multimodal Data for Enhanced ASD Assessment
Integration of multimodal data is a promising approach to enhancing explainable ASD assessment with more informed and clinically relevant predictions. Studies have demonstrated that combining multiple forms of data, such as a hybrid of imaging and behavioral data, can provide a more comprehensive representation of the neurobiological and behavioral characteristics of ASD and help distinguish it from comorbid conditions [47,61,72]. Primarily, multimodal approaches provide complementary information from different sources. For instance, combining sMRI, fMRI, and behavioral data can cover a broader range of ASD-relevant biomarkers, leading to better identification of subtle ASD-related signals with improved accuracy [48,64,72,73,74]. Categorically, early and late fusion techniques have been widely explored for multimodal data integration [61]. Early fusion merges data from multiple sources before model training, whereas late fusion combines the outputs of separate models trained on distinct datasets; the former enriches the feature space, while the latter lets each model specialize in one data type before its predictions are aggregated. This is particularly valuable in ASD assessment, where heterogeneity across patient populations is high [3,52,97]. However, implementing multimodal approaches is challenging, owing in part to their computational intensiveness and the need for advanced data harmonization techniques to handle disparities during preprocessing and modeling [55]. Challenges of multimodal integration include domain adaptation strategies for unifying feature spaces and the temporal synchronization of modalities (e.g., aligning fMRI with behavioral data); ensuring compatibility requires preprocessing techniques such as z-score normalization and fusion algorithms such as Transformer-based fusion or Canonical Correlation Analysis (CCA).
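The early- versus late-fusion distinction can be sketched with two synthetic "modalities" (hypothetical imaging and behavioral feature blocks; a plain logistic regression stands in for the modality models, and the probability average stands in for more elaborate fusion schemes such as CCA or Transformer-based fusion mentioned above):

```python
# Illustrative early vs late fusion on synthetic data: concatenate features
# and train one model (early), or train one model per modality and average
# their predicted probabilities (late).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=12, n_informative=6,
                           random_state=0)
X_img, X_beh = X[:, :8], X[:, 8:]   # pretend split: "imaging" vs "behavioral"
(Xi_tr, Xi_te, Xb_tr, Xb_te,
 y_tr, y_te) = train_test_split(X_img, X_beh, y, test_size=0.25, random_state=0)

# Early fusion: concatenate features, train one model on the joint space.
early = LogisticRegression(max_iter=1000).fit(np.hstack([Xi_tr, Xb_tr]), y_tr)
early_acc = early.score(np.hstack([Xi_te, Xb_te]), y_te)

# Late fusion: one model per modality, then average their probabilities.
m_img = LogisticRegression(max_iter=1000).fit(Xi_tr, y_tr)
m_beh = LogisticRegression(max_iter=1000).fit(Xb_tr, y_tr)
p = (m_img.predict_proba(Xi_te)[:, 1] + m_beh.predict_proba(Xb_te)[:, 1]) / 2
late_acc = float(((p >= 0.5).astype(int) == y_te).mean())

print(f"early fusion accuracy: {early_acc:.2f}, late fusion accuracy: {late_acc:.2f}")
```

Which scheme wins depends on how correlated the modalities are and how much data each modality-specific model sees, which is why the reviewed studies explore both.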
4.4 Alignment with Clinical Diagnostic Assumptions and Practices
One of the critical considerations in developing ML models for ASD diagnosis is their alignment with established clinical diagnostic assumptions and practices. For AI-based diagnostic tools to be clinically viable, they must demonstrate high accuracy and be consistent with healthcare professionals’ diagnostic criteria and clinical judgment. Many studies have emphasized the importance of ensuring that machine learning models reflect the multifaceted nature of ASD diagnosis, which includes behavioral, neurocognitive, and socio-communicative dimensions [76]. Due to the sensitive nature of ASD diagnostic data, particularly in the areas of neuroimaging and genetics, ethical issues such as algorithmic bias, data anonymization, informed consent, and model responsibility need to be systematically addressed. For XAI approaches to ensure ethical compliance, fairness-aware training procedures and open auditing techniques are essential.
The computational expense of methods like SHAP may limit their deployment in real-time diagnostic systems. Strengthening the connection between clinicians and AI requires user-friendly interfaces and training materials that make model decisions traceable and actionable.
Integrating these dimensions into AI models is essential for ensuring that these models do not simply focus on specific features that may be biased or limited in scope. For example, models that rely solely on behavioral markers may overlook neurobiological or cognitive factors, leading to incomplete or misleading diagnoses. Despite the possibility of iterative expert feedback and refinement of ML models, ethical and regulatory aspects of implementing AI in medical settings are critical for data privacy and model accountability [31]. The foregoing discussions on the included studies highlight the need for transparency and accountability in developing and using AI models, ensuring clinicians and patients trust the technology and its outputs.
5. Conclusions
Based on a systematic literature review, the present study provides an extensive synthesis of XAI and ML applications for ASD assessment. The study followed the PRISMA guidelines, applying a rigorous methodology from retrieving related records across assorted scientific databases to selecting the thirty-four (34) included studies and extracting the most pertinent parameters from the literature. A descriptive analysis revealed a trend toward integrating XAI into ASD classification models, driven by the need for interpretability and transparency in the clinical assessment of ASD. Thematic synthesis highlighted major areas: the most popular ML models utilized, the different explainability approaches employed, the efficacy of multimodal data integration, the interpretation of XAI and ML evaluation metrics, and the alignment of all these with fundamental clinical diagnostic practices. SVM, DNN, and RF emerged as the models most frequently integrated with XAI approaches, each with unique advantages, from SVM's robustness in high-dimensional data to DNN's capacity to capture complex nonlinear relationships despite its inherent interpretability challenge. Explainability approaches such as SHAP, LIME, and Grad-CAM were crucial in translating ML outputs into insights that clinicians could trust, assisting in the creation of proper clinical instruments. Additionally, proposals for integrating multimodal data, including behavioral assessments and neuroimaging, have shown promise in providing a comprehensive diagnostic perspective and enhancing model robustness. In terms of trade-offs, LIME excels in local explanation fidelity but may be unstable across samples, whereas SHAP offers global interpretability with robust feature attribution; LIME thus suits case-specific insights in behavioral screening, while SHAP's global importance metrics better match biomarkers and ROIs in neuroimaging.
Tools such as ClassifDAG could support visual performance tracking and statistical comparison across the ML classifiers used in ASD diagnosis, helping to standardize performance reviews beyond isolated accuracy reports.
However, several limitations were identified. Notably, while XAI methods increase interpretability, balancing diagnostic accuracy with model transparency remains challenging, especially for complex models like DNNs. In addition, although multimodal data approaches improve diagnostic performance, challenges in data harmonization and computational demands may hinder their clinical scalability. Future studies should focus on: (i) creating XAI tools tailored to the behavioral and neuroimaging domains; (ii) building benchmark datasets annotated with clinician feedback; (iii) conducting longitudinal performance-tracking studies; and (iv) incorporating patient-centered feedback.
In essence, the present study highlights the promising role of XAI in making ML-based algorithms for ASD assessment clinically viable. It also underscores the need for further research to overcome the most pertinent limitations on generalizability, interpretability, and real-life implementation, and shows how mitigating these challenges could pave the way for accurate, transparent, and robust AI-supported tools in ASD diagnostics.
Author Contributions
Dr. Abdulmalik Ahmad Lawan oversaw conceptualization, methodology, project management, and supervision. Mrs. Hauwa Zakariyya Abdullahi, Mr. Abdullahi Yunusa Abdullahi, and Dr. Sadiya Tahir carried out methodology implementation, data collection, visualization, and drafting under his guidance.
Competing Interests
The authors have declared that no competing interests exist.
References
- Newschaffer CJ, Croen LA, Daniels J, Giarelli E, Grether JK, Levy SE, et al. The epidemiology of autism spectrum disorders. Annu Rev Public Health. 2007; 28: 235-258. [CrossRef] [Google scholar] [PubMed]
- Waterhouse L. Heterogeneity thwarts autism explanatory power: A proposal for endophenotypes. Front Psychiatry. 2022; 13: 947653. [CrossRef] [Google scholar] [PubMed]
- Hassan MM, Mokhtar HM. Investigating autism etiology and heterogeneity by decision tree algorithm. Inform Med Unlocked. 2019; 16: 100215. [CrossRef] [Google scholar]
- Esler AN, Bal VH, Guthrie W, Wetherby A, Weismer SE, Lord C. The autism diagnostic observation schedule, toddler module: Standardized severity scores. J Autism Dev Disord. 2015; 45: 2704-2720. [CrossRef] [Google scholar] [PubMed]
- Hus V, Lord C. The autism diagnostic observation schedule, module 4: Revised algorithm and standardized severity scores. J Autism Dev Disord. 2014; 44: 1996-2012. [CrossRef] [Google scholar] [PubMed]
- Kim SH, Lord C. Autism diagnostic interview, revised. In: Encyclopedia of clinical neuropsychology. Cham: Springer; 2017. pp. 1-3. [CrossRef] [Google scholar]
- Becker MM, Wagner MB, Bosa CA, Schmidt C, Longo D, Papaleo C, et al. Translation and validation of Autism Diagnostic Interview-Revised (ADI-R) for autism diagnosis in Brazil. Arq Neuropsiquiatr. 2012; 70: 185-190. [CrossRef] [Google scholar] [PubMed]
- Ruparelia K, Abubakar A, Badoe E, Bakare M, Visser K, Chugani DC, et al. Autism spectrum disorders in Africa: Current challenges in identification, assessment, and treatment: A report on the International Child Neurology Association Meeting on ASD in Africa, Ghana, April 3-5, 2014. J Child Neurol. 2016; 31: 1018-1026. [CrossRef] [Google scholar] [PubMed]
- Durkin MS, Elsabbagh M, Barbaro J, Gladstone M, Happe F, Hoekstra RA, et al. Autism screening and diagnosis in low resource settings: Challenges and opportunities to enhance research and services worldwide. Autism Res. 2015; 8: 473-476. [CrossRef] [Google scholar] [PubMed]
- Bakare MO, Frazier TW, Karpur A, Abubakar A, Nyongesa MK, Mwangi PM, et al. Brief report: Validity and reliability of the Nigerian autism screening questionnaire. Autism. 2022; 26: 1581-1590. [CrossRef] [Google scholar] [PubMed]
- Cavus N, Lawan AA, Ibrahim Z, Dahiru A, Tahir S, Abdulrazak UI, et al. A systematic literature review on the application of machine-learning models in behavioral assessment of autism spectrum disorder. J Pers Med. 2021; 11: 299. [CrossRef] [Google scholar] [PubMed]
- Lawan AA, Cavus N, Yunusa RI, Abdulrazak UI, Tahir S. Fundamentals of machine-learning modeling for behavioral screening and diagnosis of autism spectrum disorder. In: Neural engineering techniques for autism spectrum disorder. Academic Press; 2023. pp. 253-268. [CrossRef] [Google scholar]
- Lawan AA, Cavus N. A clinical validity-preserving machine learning approach for behavioral assessment of autism spectrum disorder. OBM Neurobiol. 2022; 6: 138. [CrossRef] [Google scholar]
- Bone D, Bishop SL, Black MP, Goodwin MS, Lord C, Narayanan SS. Use of machine learning to improve autism screening and diagnostic instruments: Effectiveness, efficiency, and multi-instrument fusion. J Child Psychol Psychiatry. 2016; 57: 927-937. [CrossRef] [Google scholar] [PubMed]
- Parlett-Pelleriti CM, Stevens E, Dixon D, Linstead EJ. Applications of unsupervised machine learning in autism spectrum disorder research: A review. Rev J Autism Dev Disord. 2023; 10: 406-421. [CrossRef] [Google scholar]
- Wang H, Jing H, Yang J, Liu C, Hu L, Tao G, et al. Identifying autism spectrum disorder from multi-modal data with privacy-preserving. NPJ Ment Health Res. 2024; 3: 15. [CrossRef] [Google scholar] [PubMed]
- Alkahtani H, Aldhyani TH, Alzahrani MY. Deep learning algorithms to identify autism spectrum disorder in children-based facial landmarks. Appl Sci. 2023; 13: 4855. [CrossRef] [Google scholar]
- Heinsfeld AS, Franco AR, Craddock RC, Buchweitz A, Meneguzzi F. Identification of autism spectrum disorder using deep learning and the ABIDE dataset. NeuroImage Clin. 2018; 17: 16-23. [CrossRef] [Google scholar] [PubMed]
- Parikh MN, Li H, He L. Enhancing diagnosis of autism with optimized machine learning models and personal characteristic data. Front Comput Neurosci. 2019; 13: 9. [CrossRef] [Google scholar] [PubMed]
- Beltramin D, Lamas E, Bousquet C. Ethical issues in the utilization of black boxes for artificial intelligence in medicine. In: Advances in informatics, management and technology in healthcare. IOS Press; 2022. pp. 249-252. [CrossRef] [Google scholar] [PubMed]
- Linardatos P, Papastefanopoulos V, Kotsiantis S. Explainable AI: A review of machine learning interpretability methods. Entropy. 2020; 23: 18. [CrossRef] [Google scholar] [PubMed]
- Zednik C. Solving the black box problem: A normative framework for explainable artificial intelligence. Philos Technol. 2021; 34: 265-288. [CrossRef] [Google scholar]
- Yang G, Ye Q, Xia J. Unbox the black-box for the medical explainable AI via multi-modal and multi-centre data fusion: A mini-review, two showcases and beyond. Inf Fusion. 2022; 77: 29-52. [CrossRef] [Google scholar] [PubMed]
- Biswas M, Kaiser MS, Mahmud M, Al Mamun S, Hossain MS, Rahman MA. An XAI based autism detection: The context behind the detection. In: International Conference on Brain Informatics. Cham: Springer International Publishing; 2021. pp. 448-459. [CrossRef] [Google scholar]
- De Belen RA, Bednarz T, Sowmya A. Eyexplain autism: Interactive system for eye tracking data analysis and deep neural network interpretation for autism spectrum disorder diagnosis. Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems. 2021. doi: 10.1145/3411763.3451784. [CrossRef] [Google scholar]
- Leroy G, Andrews JG, KeAlohi-Preece M, Jaswani A, Song H, Galindo MK, et al. Transparent deep learning to identify autism spectrum disorders (ASD) in EHR using clinical notes. J Am Med Inform Assoc. 2024; 31: 1313-1321. [CrossRef] [Google scholar] [PubMed]
- Ali S, Akhlaq F, Imran AS, Kastrati Z, Daudpota SM, Moosa M. The enlightening role of explainable artificial intelligence in medical & healthcare domains: A systematic literature review. Comput Biol Med. 2023; 166: 107555. [CrossRef] [Google scholar] [PubMed]
- Hulsen T. Explainable artificial intelligence (XAI): Concepts and challenges in healthcare. AI. 2023; 4: 652-666. [CrossRef] [Google scholar]
- Samek W, Wiegand T, Müller KR. Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models. arXiv. 2017. doi: 10.48550/arXiv.1708.08296. [Google scholar]
- Omisade O, Gegov A, Zhou SM, Good A, Tryfona C, Sengar SS, et al. Explainable artificial intelligence and mobile health for treating eating disorders in young adults with autism spectrum disorder based on the theory of change: A mixed method protocol. In: International Conference on Frontiers of Intelligent Computing: Theory and Applications. Singapore: Springer Nature Singapore; 2023. pp. 31-44. [CrossRef] [Google scholar]
- Mahmud M, Kaiser MS, Rahman MA, Wadhera T, Brown DJ, Shopland N, et al. Towards explainable and privacy-preserving artificial intelligence for personalisation in autism spectrum disorder. In: International Conference on Human-Computer Interaction. Cham: Springer International Publishing; 2022. pp. 356-370. [CrossRef] [Google scholar]
- Eslami T, Almuqhim F, Raiker JS, Saeed F. Machine learning methods for diagnosing autism spectrum disorder and attention-deficit/hyperactivity disorder using functional and structural MRI: A survey. Front Neuroinform. 2021; 14: 575999. [CrossRef] [Google scholar] [PubMed]
- Dcouto SS, Pradeepkandhasamy J. Multimodal deep learning in early autism detection—Recent advances and challenges. Eng Proc. 2024; 59: 205. [CrossRef] [Google scholar]
- Marey A, Arjmand P, Alerab AD, Eslami MJ, Saad AM, Sanchez N, et al. Explainability, transparency and black box challenges of AI in radiology: Impact on patient care in cardiovascular radiology. Egypt J Radiol Nucl Med. 2024; 55: 183. [CrossRef] [Google scholar]
- Arrieta AB, Díaz-Rodríguez N, Del Ser J, Bennetot A, Tabik S, Barbado A, et al. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Inf Fusion. 2020; 58: 82-115. [CrossRef] [Google scholar]
- Tjoa E, Guan C. A survey on explainable artificial intelligence (XAI): Toward medical xai. IEEE Trans Neural Netw Learn Syst. 2020; 32: 4793-4813. [CrossRef] [Google scholar] [PubMed]
- Gerlings J, Jensen MS, Shollo A. Explainable AI, but explainable to whom? An exploratory case study of xAI in healthcare. In: Handbook of Artificial Intelligence in Healthcare: Vol 2: Practicalities and Prospects. Cham: Springer International Publishing; 2021. pp. 169-198. [CrossRef] [Google scholar]
- Joudar SS, Albahri AS, Hamid RA, Zahid IA, Alqaysi ME, Albahri OS, et al. Artificial intelligence-based approaches for improving the diagnosis, triage, and prioritization of autism spectrum disorder: A systematic review of current trends and open issues. Artif Intell Rev. 2023; 56: 53-117. [CrossRef] [Google scholar]
- Viswan V, Shaffi N, Mahmud M, Subramanian K, Hajamohideen F. Explainable artificial intelligence in Alzheimer’s disease classification: A systematic review. Cogn Comput. 2024; 16: 1-44. [CrossRef] [Google scholar]
- Vimbi V, Shaffi N, Mahmud M. Interpreting artificial intelligence models: A systematic review on the application of LIME and SHAP in Alzheimer’s disease detection. Brain Inform. 2024; 11: 10. [CrossRef] [Google scholar] [PubMed]
- Kong X, Liu S, Zhu L. Toward Human-centered XAI in Practice: A survey. Mach Intell Res. 2024; 21: 740-770. [CrossRef] [Google scholar]
- Moher D, Liberati A, Tetzlaff J, Altman DG, Antes G, Atkins D, et al. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLoS Med. 2009; 6: e1000097. [CrossRef] [Google scholar] [PubMed]
- Al-Jawahiri R, Milne E. Resources available for autism research in the big data era: A systematic review. PeerJ. 2017; 5: e2880. [CrossRef] [Google scholar] [PubMed]
- Silva-Calpa GF, Raposo AB, Ortega FR. Collaboration support in co-located collaborative systems for users with autism spectrum disorders: A systematic literature review. Int J Hum Comput Interact. 2021; 37: 15-35. [CrossRef] [Google scholar]
- Lanciano T, Bonchi F, Gionis A. Explainable classification of brain networks via contrast subgraphs. Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2020. doi: 10.1145/3394486.3403383. [CrossRef] [Google scholar]
- Zhang M, Ma Y, Zheng L, Wang Y, Liu Z, Ma J, et al. An explainable diagnostic method for autism spectrum disorder using neural network. J Inf Sci Eng. 2021; 37: 347-363. [Google scholar]
- Eslami T, Raiker JS, Saeed F. Explainable and scalable machine learning algorithms for detection of autism spectrum disorder using fMRI data. In: Neural engineering techniques for autism spectrum disorder. Academic Press; 2021. pp. 39-54. [CrossRef] [Google scholar]
- Itani S, Thanou D. Combining anatomical and functional networks for neuropathology identification: A case study on autism spectrum disorder. Med Image Anal. 2021; 69: 101986. [CrossRef] [Google scholar] [PubMed]
- Liang S, Sabri AQ, Alnajjar F, Loo CK. Autism spectrum self-stimulatory behaviors classification using explainable temporal coherency deep features and svm classifier. IEEE Access. 2021; 9: 34264-34275. [CrossRef] [Google scholar]
- Sevilla J, Samper JJ, Herrera G, Fernández M. SMART-ASD, model and ontology definition: A technology recommendation system for people with autism and/or intellectual disabilities. Int J Metadata Semant Ontol. 2018; 13: 166-178. [CrossRef] [Google scholar]
- Supekar K, Ryali S, Yuan R, Kumar D, de Los Angeles C, Menon V. Robust, generalizable, and interpretable artificial intelligence–derived brain fingerprints of autism and social communication symptom severity. Biol Psychiatry. 2022; 92: 643-653. [CrossRef] [Google scholar] [PubMed]
- Supekar K, de Los Angeles C, Ryali S, Cao K, Ma T, Menon V. Deep learning identifies robust gender differences in functional brain organization and their dissociable links to clinical symptoms in autism. Br J Psychiatry. 2022; 220: 202-209. [CrossRef] [Google scholar] [PubMed]
- Magboo MS, Magboo VP. Explainable AI for autism classification in children. In: Agents and Multi-Agent Systems: Technologies and Applications 2022. Singapore: Springer Nature Singapore; 2022. pp. 195-205. [CrossRef] [Google scholar]
- Garg A, Parashar A, Barman D, Jain S, Singhal D, Masud M, et al. Autism spectrum disorder prediction by an explainable deep learning approach. Comput Mater Contin. 2022; 71: 1459-1471. [CrossRef] [Google scholar]
- Hilal AM, Issaoui I, Obayya M, Al-Wesabi FN, Nemri N, Hamza MA, et al. Modeling of explainable artificial intelligence for biomedical mental disorder diagnosis. Comput Mater Contin. 2022; 71: 3853-3867. [CrossRef] [Google scholar]
- Chen Y, Liu A, Fu X, Wen J, Chen X. An invertible dynamic graph convolutional network for multi-Center ASD classification. Front Neurosci. 2022; 15: 828512. [CrossRef] [Google scholar] [PubMed]
- Alam MS, Rashid MM, Faizabadi AR, Mohd Zaki HF, Alam TE, Ali MS, et al. Efficient deep learning-based data-centric approach for autism spectrum disorder diagnosis from facial images using explainable AI. Technologies. 2023; 11: 115. [CrossRef] [Google scholar]
- Kang H, Yang M, Kim GH, Lee TS, Park S. DeepASD: Facial image analysis for autism spectrum diagnosis via explainable artificial intelligence. Proceedings of the 2023 Fourteenth International Conference on Ubiquitous and Future Networks (ICUFN); 2023 July 4-7; Paris, France. Piscataway, NJ: IEEE. [CrossRef] [Google scholar]
- Wang C, Qi Y, Chen Z. Explainable Gated Recurrent Unit to explore the effect of co-exposure to multiple air pollutants and meteorological conditions on mental health outcomes. Environ Int. 2023; 171: 107689. [CrossRef] [Google scholar] [PubMed]
- Adilakshmi J, Vinoda Reddy G, Nidumolu KD, Cosme Pecho RD, Pasha MJ. A medical diagnosis system based on explainable artificial intelligence: Autism spectrum disorder diagnosis. Int J Intell Syst Appl Eng. 2023; 11: 385-402. [Google scholar]
- Torres JM, Medina-DeVilliers S, Clarkson T, Lerner MD, Riccardi G. Evaluation of interpretability for deep learning algorithms in EEG emotion recognition: A case study in autism. Artif Intell Med. 2023; 143: 102545. [CrossRef] [Google scholar] [PubMed]
- Rodriguez U, Prieto JC, Styner M. IcoConv: Explainable brain cortical surface analysis for ASD classification. In: International Workshop on Shape in Medical Imaging. Cham: Springer Nature Switzerland; 2023. pp. 248-258. [CrossRef] [Google scholar] [PubMed]
- Kainer D, Templeton AR, Prates ET, Jacobson D, Allan ER, Climer S, et al. Structural variants identified using non-Mendelian inheritance patterns advance the mechanistic understanding of autism spectrum disorder. Hum Genet Genom Adv. 2023; 4: 100150. [CrossRef] [Google scholar] [PubMed]
- Paolucci C, Giorgini F, Scheda R, Alessi FV, Diciotti S. Early prediction of autism spectrum disorders through interaction analysis in home videos and explainable artificial intelligence. Comput Hum Behav. 2023; 148: 107877. [CrossRef] [Google scholar]
- Saakyan W, Norden M, Herrmann L, Kirsch S, Lin M, Guendelman S, et al. On scalable and interpretable autism detection from social interaction behavior. Proceedings of the 11th International Conference on Affective Computing and Intelligent Interaction (ACII); 2023 September 10-13; Cambridge, MA, USA. Piscataway, NJ: IEEE. [CrossRef] [Google scholar]
- Xu K, Sun Z, Qiao Z, Chen A. Diagnosing autism severity associated with physical fitness and gray matter volume in children with autism spectrum disorder: Explainable machine learning method. Complement Ther Clin Pract. 2024; 54: 101825. [CrossRef] [Google scholar] [PubMed]
- Jung W, Jeon E, Kang E, Suk HI. EAG-RS: A novel explainability-guided ROI-selection framework for ASD Diagnosis via inter-regional relation learning. IEEE Trans Med Imaging. 2023; 43: 1400-1411. [CrossRef] [Google scholar] [PubMed]
- Bargagna F, De Santi LA, Santarelli MF, Positano V, Vanello N. Bayesian XAI methods towards a robustness-centric approach to deep learning: An ABIDE I study. Proceedings of the 2024 IEEE International Symposium on Medical Measurements and Applications (MeMeA); 2024 June 26-28; Eindhoven, Netherlands. Piscataway, NJ: IEEE. [CrossRef] [Google scholar]
- Xu J, Bian Q, Li X, Zhang A, Ke Y, Qiao M, et al. Contrastive graph pooling for explainable classification of brain networks. IEEE Trans Med Imaging. 2024; 43: 3292-3305. [CrossRef] [Google scholar] [PubMed]
- Jeon I, Kim M, So D, Kim EY, Nam Y, Kim S, et al. Reliable autism spectrum disorder diagnosis for pediatrics using machine learning and explainable AI. Diagnostics. 2024; 14: 2504. [CrossRef] [Google scholar] [PubMed]
- Colonnese F, Di Luzio F, Rosato A, Panella M. Enhancing autism detection through gaze analysis using eye tracking sensors and data attribution with distillation in deep neural networks. Sensors. 2024; 24: 7792. [CrossRef] [Google scholar] [PubMed]
- Nahas LD, Datta A, Alsamman AM, Adly MH, Al-Dewik N, Sekaran K, et al. Genomic insights and advanced machine learning: Characterizing autism spectrum disorder biomarkers and genetic interactions. Metab Brain Dis. 2024; 39: 29-42. [CrossRef] [Google scholar] [PubMed]
- Kim YG, Ravid O, Zheng X, Kim Y, Neria Y, Lee S, et al. Explaining deep learning-based representations of resting state functional connectivity data: Focusing on interpreting nonlinear patterns in autism spectrum disorder. Front Psychiatry. 2024; 15: 1397093. [CrossRef] [Google scholar] [PubMed]
- Albahri AS, Joudar SS, Hamid RA, Zahid IA, Alqaysi ME, Albahri OS, et al. Explainable artificial intelligence multimodal of autism triage levels using fuzzy approach-based multi-criteria decision-making and LIME. Int J Fuzzy Syst. 2024; 26: 274-303. [CrossRef] [Google scholar]
- Alzakari SA, Allinjawi A, Aldrees A, Zamzami N, Umer M, Innab N, et al. Early detection of autism spectrum disorder using explainable AI and optimized teaching strategies. J Neurosci Methods. 2025; 413: 110315. [CrossRef] [Google scholar] [PubMed]
- Rajagopalan SS, Zhang Y, Yahia A, Tammimies K. Machine learning prediction of autism spectrum disorder from a minimal set of medical and background information. JAMA Netw Open. 2024; 7: e2429229. [CrossRef] [Google scholar] [PubMed]
- Vidya S, Gupta K, Aly A, Wills A, Ifeachor E, Shankar R. Explainable AI for autism diagnosis: Identifying critical brain regions using fMRI data. arXiv. 2024. doi: 10.48550/arXiv.2409.15374. [Google scholar]
- Kong Y, Gao J, Xu Y, Pan Y, Wang J, Liu J. Classification of autism spectrum disorder by combining brain connectivity and deep neural network classifier. Neurocomputing. 2019; 324: 63-68. [CrossRef] [Google scholar]
- Ma Y, Guo G. Support vector machines applications. Cham, Switzerland: Springer International Publishing; 2014. [CrossRef] [Google scholar]
- Wingfield B, Miller S, Yogarajah P, Kerr D, Gardiner B, Seneviratne S, et al. A predictive model for paediatric autism screening. Health Inform J. 2020; 26: 2538-2553. [CrossRef] [Google scholar] [PubMed]
- Goel N, Grover B, Gupta D, Khanna A, Sharma M. Modified grasshopper optimization algorithm for detection of autism spectrum disorder. Phys Commun. 2020; 41: 101115. [CrossRef] [Google scholar]
- Bracher-Smith M, Crawford K, Escott-Price V. Machine learning for genetic prediction of psychiatric disorders: A systematic review. Mol Psychiatry. 2021; 26: 70-79. [CrossRef] [Google scholar] [PubMed]
- Ganatra D, Nilkant D. Ensemble methods to improve accuracy of a classifier. Int J Adv Trends Comput Sci Eng. 2020; 9: 3434-3439. [CrossRef] [Google scholar]
- Rigatti SJ. Random forest. J Insur Med. 2017; 47: 31-39. [CrossRef] [Google scholar] [PubMed]
- Cihan S. Autism_Image_Data [Internet]. Kaggle; [cited date 2024 December 25]. Available from: https://www.kaggle.com/datasets/cihan063/autism-image-data.
- Thabtah FF. Autistic spectrum disorder screening data for children [Internet]. UCI Machine Learning Repository; 2017 [cited date 2024 December 25]. Available from: https://archive.ics.uci.edu/dataset/419/autistic+spectrum+disorder+screening+data+for+children.
- Duan H, Zhai G, Min X, Che Z, Fang Y, Yang X, et al. A dataset of eye movements for the children with autism spectrum disorder. Proceedings of the 10th ACM Multimedia Systems Conference. 2019. doi: 10.1145/3304109.3325818. [CrossRef] [Google scholar]
- Torres JM, Clarkson T, Hauschild KM, Luhmann CC, Lerner MD, Riccardi G. Facial emotions are accurately encoded in the neural signal of those with autism spectrum disorder: A deep learning approach. Biol Psychiatry Cogn Neurosci Neuroimaging. 2022; 7: 688-695. [CrossRef] [Google scholar] [PubMed]
- Gazestani VH, Pramparo T, Nalabolu S, Kellman BP, Murray S, Lopez L, et al. A perturbed gene network containing PI3K–AKT, RAS–ERK and WNT–β-catenin pathways in leukocytes is linked to ASD genetics and symptom severity. Nat Neurosci. 2019; 22: 1624-1634. [CrossRef] [Google scholar] [PubMed]
- Luo R, Sanders SJ, Tian Y, Voineagu I, Huang N, Chu SH, et al. Genome-wide transcriptome profiling reveals the functional impact of rare de novo and recurrent CNVs in autism spectrum disorders. Am J Hum Genet. 2012; 91: 38-55. [CrossRef] [Google scholar] [PubMed]
- Hazlett HC, Gu H, McKinstry RC, Shaw DW, Botteron KN, Dager SR, et al. Brain volume findings in 6-month-old infants at high familial risk for autism. Am J Psychiatry. 2012; 169: 601-608. [CrossRef] [Google scholar] [PubMed]
- Joudar SS, Albahri AS, Hamid RA. Intelligent triage method for early diagnosis autism spectrum disorder (ASD) based on integrated fuzzy multi-criteria decision-making methods. Inform Med Unlocked. 2023; 36: 101131. [CrossRef] [Google scholar]
- ASD Screening Data for Toddlers in Saudi Arabia [Internet]. Kaggle; [cited date 2024 December 25]. Available from: https://www.kaggle.com/datasets/asdpredictioninsaudi/asd-screening-data-for-toddlers-in-saudi-arabia.
- Rajagopalan S, Dhall A, Goecke R. Self-stimulatory behaviours in the wild for autism diagnosis. Proceedings of the IEEE International Conference on Computer Vision Workshops; 2013 December 2-8; Sydney, Australia. Piscataway, NJ: IEEE. [CrossRef] [Google scholar]
- Di Martino A, Yan CG, Li Q, Denio E, Castellanos FX, Alaerts K, et al. The autism brain imaging data exchange: Towards a large-scale evaluation of the intrinsic brain architecture in autism. Mol Psychiatry. 2014; 19: 659-667. [CrossRef] [Google scholar] [PubMed]
- Di Martino A, O’connor D, Chen B, Alaerts K, Anderson JS, Assaf M, et al. Enhancing studies of the connectome in autism using the autism brain imaging data exchange II. Sci Data. 2017; 4: 170010. [CrossRef] [Google scholar] [PubMed]
- Beglinger LJ, Smith TH. A review of subtyping in autism and proposed dimensional classification model. J Autism Dev Disord. 2001; 31: 411-422. [CrossRef] [Google scholar] [PubMed]