OBM Neurobiology

(ISSN 2573-4407)

OBM Neurobiology is an international peer-reviewed Open Access journal published quarterly online by LIDSEN Publishing Inc. By design, the scope of OBM Neurobiology is broad, so as to reflect the multidisciplinary nature of the field of Neurobiology that interfaces biology with the fundamental and clinical neurosciences. As such, OBM Neurobiology embraces rigorous multidisciplinary investigations into the form and function of neurons and glia that make up the nervous system, either individually or in ensemble, in health or disease. OBM Neurobiology welcomes original contributions that employ a combination of molecular, cellular, systems and behavioral approaches to report novel neuroanatomical, neuropharmacological, neurophysiological and neurobehavioral findings related to the following aspects of the nervous system: Signal Transduction and Neurotransmission; Neural Circuits and Systems Neurobiology; Nervous System Development and Aging; Neurobiology of Nervous System Diseases (e.g., Developmental Brain Disorders; Neurodegenerative Disorders).

OBM Neurobiology publishes a variety of article types (Original Research, Review, Communication, Opinion, Comment, Conference Report, Technical Note, Book Review, etc.). Although the OBM Neurobiology Editorial Board encourages authors to be succinct, there is no restriction on the length of the papers. Authors should present their results in as much detail as possible, as reviewers are encouraged to emphasize scientific rigor and reproducibility.

Publication Speed (median values for papers published in 2024): Submission to First Decision: 7.6 weeks; Submission to Acceptance: 13.6 weeks; Acceptance to Publication: 6 days (1-2 days of FREE language polishing included)

Open Access Original Research

Interactive and Deep Learning-Powered EEG-BCI for Wrist Rehabilitation: A Game-based Prototype Study

Ebru Sayilgan ‡,*

  1. Izmir University of Economics, Fevzi Cakmak, Sakarya St. No:156, 35330, Izmir, Turkey

‡ Current Affiliation: Izmir University of Economics.

Correspondence: Ebru Sayilgan

Academic Editor: Mahsa Zeynali

Special Issue: Applications of Brain–Computer Interface (BCI) and EEG Signals Analysis

Received: August 14, 2025 | Accepted: September 17, 2025 | Published: September 24, 2025

OBM Neurobiology 2025, Volume 9, Issue 3, doi:10.21926/obm.neurobiol.2503302

Recommended citation: Sayilgan E. Interactive and Deep Learning-Powered EEG-BCI for Wrist Rehabilitation: A Game-based Prototype Study. OBM Neurobiology 2025; 9(3): 302; doi:10.21926/obm.neurobiol.2503302.

© 2025 by the authors. This is an open access article distributed under the conditions of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium or format, provided the original work is correctly cited.

Abstract

Motor deficits induced by neurological disorders severely impair activities of daily living. Conventional rehabilitation practices necessitate ongoing clinical supervision, which is costly and not always accessible. EEG-based brain-computer interface (BCI) systems offer a viable alternative by facilitating neurorehabilitation through the direct interpretation of brain signals. Nonetheless, current systems face challenges in real-time control, portability, and classification accuracy. This paper describes a novel EEG-controlled wrist rehabilitation robot with deep learning-based real-time motor intention classification. EEG signals were recorded with OpenBCI, preprocessed with noise filtering, and converted into time-frequency representations. A GoogLeNet-inspired convolutional neural network (CNN) was trained to classify wrist movement intentions. SolidWorks was used to design the mechanical structure, which was verified using finite element analysis (FEA). An Nvidia-based microcontroller was employed to control the servo motors, while an inertial measurement unit (IMU) was incorporated to enable precise, agile movement through feedback control. The proposed system attained an EEG classification accuracy of 90.24%, well above conventional feature-based classifiers. The lightweight two-degree-of-freedom (2-DoF) robotic structure enabled controlled wrist flexion, extension, and radial/ulnar deviation movements. Structural validation by FEA confirmed mechanical stability under operational loads, and the system proved feasible for real-time, user-intended motion control. The proposed study therefore offers a cost-effective, portable, deep learning-based EEG-BCI rehabilitation robot as a possible solution for neurorehabilitation. The system's high classification accuracy and real-time control capability highlight its potential for personalized rehabilitation. Future work will focus on more advanced deep learning architectures, improved motor control strategies, and extended clinical trials.

Keywords

EEG; brain-computer interface; rehabilitation robotics; deep learning; motor intention classification; game-based paradigm

1. Introduction

Motor deficits due to disorders such as spinal cord injuries, carpal tunnel syndrome, and osteoarthritis significantly diminish patients' quality of life and restrict their capacity to carry out activities of daily living [1,2,3,4,5,6,7,8]. Conventional rehabilitation approaches tend to rely on repetitive training sessions supervised by therapists in the clinic. Although effective, such methods are time-consuming, costly, and not always accessible to patients, particularly those with mobility issues or residing in remote locations [2,9].

To address these issues, brain–computer interface (BCI) technology has emerged as a promising non-invasive approach that translates brain activity into control signals for external devices. Seminal work established EEG-based BCIs as viable channels for communication and control in people with severe motor impairments [10]. Among non-invasive BCIs, EEG systems are prominent due to portability and low cost, and they have increasingly been coupled to robotic devices for rehabilitation. A systematic review on BCI-robot systems for hand rehabilitation concluded that EEG-based BCIs are "a promising approach for rehabilitation post-stroke," profiling both technical and clinical aspects [11]. In parallel, device-level studies demonstrate practical control, as seen in attention-controlled wrist rehabilitation using a low-cost EEG sensor that enables flexion/extension and radial/ulnar deviation, underscoring the feasibility of EEG-driven upper-limb therapy [12]. More recently, deep-learning-enabled EEG control has advanced the field; Mukherjee and Roy reported an EEG sensor–driven assistive device for elbow and finger rehabilitation using deep learning, reflecting the broader shift toward AI-powered upper-limb solutions [13,14].

However, this field still faces several key challenges. The low signal-to-noise ratio and susceptibility to artifacts in EEG data hinder effective motor intention classification. Existing BCI systems have limited performance and control accuracy, affecting the reliability of robotic movements. The lack of portability and high production costs also limit widespread use in home rehabilitation. In addition, sophisticated AI algorithms and deep learning models remain inadequately integrated into real-world systems for effective EEG signal classification [3,7,15].

To overcome these difficulties, this research proposes a novel EEG-based wrist rehabilitation robot. The robot integrates an accurate deep learning-based motor intention classification system with an ergonomic, lightweight design. It employs a GoogLeNet-based convolutional neural network (CNN) on time-frequency representations of EEG signals to accurately classify wrist movement intentions. The device provides a two-degree-of-freedom (2-DoF) mechanism to facilitate wrist flexion/extension and radial/ulnar deviation. It is fabricated through 3D printing to ensure cost-effectiveness and includes an inertial measurement unit (IMU) for real-time feedback and accurate control.

The primary objectives of this study are to develop a portable, low-cost, and easy-to-use EEG-based BCI system for wrist rehabilitation; enhance the classification accuracy of EEG signals through a deep learning model for motor imagery detection; and evaluate the mechanical and functional performance of the proposed system in real-time operation. Through these objectives, the research seeks to offer a practical, scalable, and intelligent rehabilitation solution that bridges clinical efficacy and user accessibility in neurorehabilitation technology.

This research brings the following contributions to EEG-based BCI systems and neurorehabilitation robotics:

  • Unlike many previous studies that used traditional machine learning methods, the present research applies a GoogLeNet-based convolutional neural network (CNN) to classify wrist motor imagery efficiently. The model achieves a high classification accuracy of 90.24%, demonstrating its feasibility for real-time brain-computer interface (BCI) applications.
  • The proposed robot enables wrist flexion/extension and radial/ulnar deviation within comfortable ergonomic ranges. The design, verified using 3D printing and FEA, provides portability, mechanical reliability, and low cost.
  • Instantaneous feedback from an inertial measurement unit (IMU), in fusion with EEG-based classification, improves the precision and reliability of the movements of the wrist.
  • The project proposes a novel game-based protocol for data collection that encourages active engagement of subjects and enhances the quality of motor imagery data collection.
  • Through the use of open-source and off-the-shelf software (OpenBCI, Arduino, MATLAB) and low-cost hardware (PLA, 3D printing, Nvidia microcontroller), the system delivers advanced neurorehabilitation capability for home and clinical applications. These contributions collectively advance the EEG-BCI research field by offering a real-time, affordable, and clinically relevant rehabilitation solution.

This work differs from previous studies by embedding a real-time, deep learning-enabled classification pipeline; a portable, lightweight 2-DoF wrist rehabilitation robot validated with finite element simulation and 3D printing; and a game-based experimental setup customized to increase participant engagement. While other works focus on isolated aspects of EEG-BCI rehabilitation (e.g., offline classification or clinic-grade robots), this work demonstrates a holistic prototype that prioritizes real-time feasibility, user comfort, and home-based rehabilitation.

2. Materials and Methods

This section provides a comprehensive description of the proposed wrist rehabilitation robot, detailing its system architecture, including mechanical design, EEG signal acquisition and processing, DL-based classification, and hardware integration. A graphical summary of the system is presented in Figure 1, outlining the key components, including the rehabilitation robot's mechanical structure, EEG signal acquisition setup, classification pipeline, and hardware integration scheme.


Figure 1 Flowchart of the study.

2.1 System Architecture and Mechanical Design

The designed wrist rehabilitation robot is compact and lightweight to facilitate ease of use and ensure user comfort. The mechanical concept, established with the aid of SolidWorks, incorporates ergonomic guidelines akin to those reported in prior studies [9,15]. A prototype of the device was 3D printed in polylactic acid (PLA) to accommodate diverse wrist sizes and ranges of motion, in line with earlier work [16,17]. Previous studies also attest to the efficacy of 3D printing in biomedical applications [18,19].

The 2-DoF rehabilitation robot provides flexion/extension and radial/ulnar deviation motion. Based on the literature, the maximum flexion and extension angles were initially set at 70° and 60°, respectively, but both were subsequently lowered to 50° for safe operation. The ulnar and radial deviation ranges were likewise limited to 30°–40° and 15°–20°, respectively [20].

Stress distribution and mechanical stability were analyzed using finite element analysis (FEA) with ANSYS and ABAQUS software, ensuring the components withstand dynamic operational loads. To optimize performance, torque, speed, weight, and size constraints were considered for motor selection [19].

To verify accurate positioning, the measured angular displacement from the IMU was compared against the expected values from the servo motor control system. This feedback mechanism ensures that the rehabilitation robot maintains the intended wrist movement trajectories while minimizing oscillations and deviations [20,21].
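As a rough illustration of this feedback check, the MATLAB sketch below compares the IMU-reported wrist angle against the commanded servo angle; the serial port name, message format, and tolerance are assumptions for illustration, not the device's actual protocol.

    % Illustrative IMU-vs-command angle check (assumed port name and message format).
    imu = serialport("COM4", 115200);        % IMU stream relayed by the microcontroller
    commandedAngleDeg = 30;                  % example servo command (degrees)
    toleranceDeg = 2;                        % acceptable tracking error (assumed)

    msg = readline(imu);                     % assumed format: "pitch,roll" in degrees
    vals = str2double(split(msg, ","));
    measuredAngleDeg = vals(1);

    errDeg = commandedAngleDeg - measuredAngleDeg;
    if abs(errDeg) > toleranceDeg
        fprintf("Tracking error %.1f deg exceeds tolerance; issuing correction.\n", errDeg);
        % a corrective command would be sent back to the servo controller here
    end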

2.2 Video Game Description and Data Labeling

The experimental paradigm employed in this study was based on a three-lane platform-style runner game, designed to simulate directional motor intention through virtual movement tasks. In this game environment, the player navigates a character continuously moving forward along a three-lane path (Figure 2). To avoid dynamic obstacles appearing along the route, the player must execute specific actions, each corresponding to a distinct directional command: jumping over obstacles (up), sliding under barriers (down), and switching lanes to the left or right (left/right). These discrete directional actions serve as proxies for imagined motor tasks in the context of EEG data acquisition, enabling the mapping of cognitive motor intentions to well-defined control commands in a brain–computer interface (BCI) framework. Informal participant feedback was collected using a 5-point Likert scale, yielding a mean engagement score of 4.5 ± 0.3. Participants reported higher attention levels while playing the game than during typical cue-based tasks, indicating improved concentration with the game-based paradigm. Although further validation is required, these findings support the feasibility of conducting EEG-BCI data collection with engaging visual tasks.


Figure 2 Experimental Procedure.

2.3 EEG Signal Acquisition and Processing

This research analyzes EEG recordings from 10 healthy subjects (3 female, 7 male) aged 18 to 30 years. All were right-handed with no history of neurological or musculoskeletal disorders. All subjects had normal or corrected-to-normal vision, wearing corrective glasses during the experiments where needed. Before data collection, the subjects gave informed consent in line with research ethics requirements.

EEG recordings were obtained with a 16-channel EEG headset at a sampling frequency of 125 Hz. Interference was reduced by removing power-line noise with a 50 Hz notch filter and applying a 0.1–100 Hz band-pass filter to preserve the useful EEG components and suppress artifacts [6,7].
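A minimal MATLAB sketch of this pre-filtering stage is shown below. Because the 125 Hz sampling rate limits the representable band to the 62.5 Hz Nyquist frequency, the upper band edge in the sketch is capped below Nyquist; the filter order and notch bandwidth are illustrative assumptions.

    % Illustrative pre-filtering (assumed filter parameters).
    fs = 125;                                % OpenBCI sampling rate (Hz)
    % 50 Hz notch for power-line interference
    w0 = 50 / (fs/2);                        % normalized notch frequency
    bw = w0 / 35;                            % notch bandwidth (Q ~ 35, assumed)
    [bNotch, aNotch] = iirnotch(w0, bw);
    % eeg: [samples x channels] matrix of raw EEG
    eegFiltered = filtfilt(bNotch, aNotch, eeg);
    % band-pass; upper edge capped below the 62.5 Hz Nyquist limit
    eegFiltered = bandpass(eegFiltered, [0.1 60], fs);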

The experiment included two separate motor imagery tasks presented through interactive video games. Before starting, participants had a chance to explore each game’s main menu. This helped them familiarize themselves with the gameplay mechanics and the motor imagery tasks involved. The experimental setup and signal recording process are illustrated in Figure 3.


Figure 3 Game-based EEG experimental setup and signal recording using OpenBCI.

During gameplay, a labeling system marked specific frames in the 17th column of the recorded data, indicating user input actions (up, down, left, and right) along with the start and stop events of the experiment. Each participant completed five rounds of 30-second gameplay sessions, each followed by a 10-second rest. This process generated a dataset of 17 columns (16 EEG channels plus the label column) and approximately 25,000 time samples for each game.
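The sketch below illustrates how such a recording could be segmented into labeled trials using the marker column; the file name, label codes, and epoch length are assumptions for illustration.

    % Illustrative epoch extraction from the 17-column recording (assumed file and codes).
    fs = 125;
    data = readmatrix("session01.csv");      % hypothetical file: [samples x 17]
    eeg = data(:, 1:16);                     % 16 EEG channels
    markers = data(:, 17);                   % 0 = no event, 1..4 = up/down/left/right (assumed)

    epochLen = 2 * fs;                       % 2 s window after each event (assumed)
    eventIdx = find(markers > 0);
    epochs = {}; labels = [];
    for k = 1:numel(eventIdx)
        i0 = eventIdx(k);
        if i0 + epochLen - 1 <= size(eeg, 1)
            epochs{end+1} = eeg(i0:i0+epochLen-1, :); %#ok<SAGROW>
            labels(end+1) = markers(i0);              %#ok<SAGROW>
        end
    end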

EEG signals were processed using the EEGLAB toolbox. The pre-processing steps followed best practices from Wolpaw et al. [6] and Craik et al. [7] to ensure high-quality neural data. The artifact removal process included [22]:

  • Independent Component Analysis (ICA) to eliminate eye movement and muscle artifacts.
  • Adaptive filtering to correct baseline drift.
  • Time-frequency spectral analysis to extract key motor imagery features.

EEG signals were converted into time-frequency spectrum images. This transformation allowed the system to capture dynamic features related to motor intention. Preprocessing followed standard BCI protocols, including ICA-based artifact removal, adaptive baseline correction, and time-frequency feature extraction to optimize motor imagery classification [3,23].
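As a sketch of this step, one channel of one epoch can be converted into a log-power spectrogram image sized for the network input; the window, overlap, and FFT lengths are illustrative assumptions.

    % Illustrative time-frequency image generation (assumed window/overlap/NFFT).
    fs = 125;
    x = epochs{1}(:, 1);                     % one channel of one epoch (see sketch above)
    [s, ~, ~] = spectrogram(x, hamming(64), 56, 128, fs);
    powerDb = 10 * log10(abs(s).^2 + eps);   % log power
    img = imresize(rescale(powerDb), [224 224]);   % normalize and match CNN input size
    imwrite(img, "trial_001_ch01.png");      % saved as a grayscale spectrum image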

Deep learning classification was carried out using a convolutional neural network (CNN), a type of artificial neural network designed to process structured spatial data efficiently [22]. Unlike traditional ANNs, CNNs have three-dimensional layers (width, height, and depth) that enable direct feature extraction from input images without extensive manual feature engineering [23].

Among different CNN architectures, including AlexNet, SqueezeNet, and ResNet, GoogLeNet was chosen for its computational efficiency and 22-layer depth. The model can extract hierarchical features while reducing the chance of overfitting. GoogLeNet uses inception modules that allow multiscale feature extraction within a single convolutional layer, which improves recognition performance in EEG classification tasks.

For this study, a GoogLeNet-based CNN model was developed. The input layer was adjusted to accept 224 × 224 grayscale time-frequency spectrum images from EEG signals. The model was trained with a batch size of 32 over 50 epochs. A learning rate of 0.001 was set, and the Adam optimizer was selected for its effectiveness with sparse gradients typical in EEG data. To avoid overfitting, L2 regularization with a coefficient of 0.0005 and a dropout rate of 0.4 was applied at fully connected layers. The model began with pre-trained ImageNet weights and was then fine-tuned on a custom EEG dataset. Categorical cross-entropy was used as the loss function, and early stopping was implemented with a patience of 10 epochs based on validation loss. All training and evaluation occurred using MATLAB’s Deep Learning Toolbox on a system with an Nvidia GPU for faster computation. A summary of the hyperparameter settings used in this study is provided in Table 1.
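A transfer-learning configuration along these lines, using MATLAB's Deep Learning Toolbox and the hyperparameters listed above, might look like the sketch below; trainDS and valDS are assumed image datastores, and since the pretrained googlenet network expects 224 × 224 × 3 input, the grayscale spectra are assumed to be replicated across three channels.

    % Illustrative GoogLeNet transfer learning (assumed datastores trainDS/valDS).
    net = googlenet;                         % requires the GoogLeNet support package
    lgraph = layerGraph(net);
    numClasses = 4;                          % up / down / left / right
    lgraph = replaceLayer(lgraph, "loss3-classifier", ...
        fullyConnectedLayer(numClasses, "Name", "fc_motor"));
    lgraph = replaceLayer(lgraph, "output", classificationLayer("Name", "out_motor"));

    opts = trainingOptions("adam", ...
        "InitialLearnRate", 1e-3, ...
        "MiniBatchSize", 32, ...
        "MaxEpochs", 50, ...
        "L2Regularization", 5e-4, ...
        "Shuffle", "every-epoch", ...
        "ValidationData", valDS, ...
        "ValidationPatience", 10, ...        % early stopping
        "Verbose", false);

    trainedNet = trainNetwork(trainDS, lgraph, opts);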

Table 1 GoogLeNet model hyperparameters.

To support subject-independent evaluation, a stratified 10-fold cross-validation scheme was used. Within each fold, 70% of the data was used for training and 30% for validation, with shuffling applied to maintain class balance. This method enhances generalization performance and reduces the risk of overfitting to subject-specific patterns.
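A fold construction along these lines could be sketched as follows; labels is the class-label vector from the epoching step, and the subject-wise separation described in the text is assumed to be applied when assigning data to folds.

    % Illustrative stratified 10-fold cross-validation skeleton.
    labels = categorical(labels);
    cv = cvpartition(labels, "KFold", 10);   % stratified by class when given labels
    accPerFold = zeros(cv.NumTestSets, 1);
    for k = 1:cv.NumTestSets
        trainIdx = training(cv, k);
        testIdx  = test(cv, k);
        % ... train the CNN on trainIdx, predict on testIdx ...
        % accPerFold(k) = mean(pred == labels(testIdx));
    end
    meanAcc = mean(accPerFold);              % reported as the cross-validated accuracy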

During training, model performance was tracked using three main metrics: accuracy, precision, and sensitivity. These were calculated based on standard definitions using TP (True Positive), TN (True Negative), FP (False Positive), and FN (False Negative) values:

\[ \mathrm{Sensitivity\ (Recall)} = \frac{TP}{TP + FN} \tag{1} \]

\[ \mathrm{Precision} = \frac{TP}{TP + FP} \tag{2} \]

\[ \mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \tag{3} \]

\[ \mathrm{F1\text{-}score} = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} \tag{4} \]

The final performance metrics reported are the averages across all validation folds. This gives a solid estimate of the model’s classification ability.
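For reference, the metrics in Equations (1)-(4) can be computed per class from a confusion matrix as in the sketch below; yTrue and yPred are hypothetical label vectors from one validation fold.

    % Per-class metrics from a confusion matrix (Equations (1)-(4)).
    C = confusionmat(yTrue, yPred);          % rows: true class, columns: predicted class
    TP = diag(C);
    FP = sum(C, 1)' - TP;                    % predicted as class c but belonging elsewhere
    FN = sum(C, 2)  - TP;                    % class c predicted as another class
    TN = sum(C(:)) - TP - FP - FN;
    sensitivity = TP ./ (TP + FN);           % Eq. (1)
    precision   = TP ./ (TP + FP);           % Eq. (2)
    accuracy    = sum(TP) / sum(C(:));       % overall accuracy, cf. Eq. (3)
    f1 = 2 * (precision .* sensitivity) ./ (precision + sensitivity);   % Eq. (4)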

Folds were additionally constructed so that no data from a participant in the test set appeared in the corresponding training set, ensuring subject-independent evaluation and minimizing the risk of overfitting.

2.4 Ethics Statement

The study was approved by the Ethics Committee of Izmir University of Economics under the approval number B.30.2.İEÜSB.0.05.05-20-271 dated 19.12.2023. All procedures involving human participants were conducted under the ethical standards of the institutional and/or national research committee, as well as with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. Informed consent was obtained from all individual participants included in the study.

3. Results

3.1 Mechanical Design, Structural Validation, and System Assembly

The rehabilitation system is designed to be portable, user-friendly, and cost-effective. Weighing about 1.5 to 2 kg, the device is lightweight and easy to move, making it suitable for home rehabilitation. Its design adapts to different hand, wrist, and arm sizes, ensuring comfort during therapy. To cut production costs while retaining strength, 3D printing with PLA filament was used, an approach that balances durability and affordability [24,25]. Given these design requirements, the final structure shown in Figure 4 was selected. Finite element analysis verified that the system meets strength requirements under expected use.


Figure 4 (a) Assembly view and (b) FEM analysis results.

To assess the material's durability, the ratio of yield strength to maximum stress was calculated. The yield strength of the ABS filament was taken as 7 MPa, while the maximum stress obtained from the FEA was 3.21 MPa. The design meets the safety requirement if:

\[ \mathrm{\sigma} _{\mathrm{y}} (\mathrm{ABS})/\mathrm{\sigma} _{\mathrm{max}}\,>\,1 \tag{5} \]

where σy(ABS) = 7 MPa and σmax = 3.21 MPa. Thus:

\[ 7/3.21\,\approx\,2.18\,>\,1 \tag{6} \]

This shows that the design provides the required mechanical strength. The motor was selected based on torque requirements to achieve efficient and smooth operation. MATLAB and Simulink were used to simulate the control system and develop the algorithms, and C++ was used to program the microcontroller [26].

The final assembly demonstrated that all parts worked well together, with little mechanical interference and smooth wrist movements. The complete assembly of the rehabilitation robot is shown in Figure 4.

Informal ergonomic evaluations were conducted with all participants. Feedback confirmed that the device comfortably accommodated different wrist sizes, and participants reported no discomfort during 20-minute sessions.

3.2 EEG Signal Classification Performance, Control Integration, and Comparative Analysis

EEG signal classification was performed using the trained GoogLeNet model, following an approach similar to those of Raza et al. [3] and Li et al. [5]. The model achieved an initial classification accuracy of 90.24%, as shown in Table 2. It distinguished between wrist movement patterns and set a strong baseline for future improvements.

Table 2 Subject-wise test accuracy, precision, sensitivity, and F1-score performance.

A custom MATLAB-based control algorithm was created to synchronize EEG classification outputs with robotic actuation. The trained CNN model predicted movement intentions, which were sent to the Arduino system to control the servo motors. Closed-loop control strategies ensured precise wrist rehabilitation movements and reduced the chance of misclassified or unintended actions.
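A minimal sketch of this classification-to-actuation link is given below; the serial port name and the single-character command encoding are assumptions, not the actual firmware protocol.

    % Illustrative forwarding of a classified intention to the motor controller.
    ard = serialport("COM3", 9600);          % link to the servo controller (assumed port)
    pred = classify(trainedNet, img);        % img: one 224 x 224 x 3 spectrum image
    switch string(pred)
        case "up",    cmd = "U";             % flexion
        case "down",  cmd = "D";             % extension
        case "left",  cmd = "L";             % radial deviation
        case "right", cmd = "R";             % ulnar deviation
        otherwise,    cmd = "S";             % stop / no action (assumed)
    end
    writeline(ard, cmd);                     % firmware maps the command to servo angles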

In addition to accuracy, precision, and sensitivity, we computed per-class F1-scores and generated a confusion matrix to provide a comprehensive evaluation of classification performance. These metrics are summarized in Table 2, and the confusion matrix is shown in Figure 5.


Figure 5 Confusion matrix.

In EEG-based rehabilitation systems, signal processing and mechanical response delays can undermine the effectiveness of motor training, since they disrupt the timing between robotic feedback and user intention. To minimize such latency, the system proposed herein employs an optimized signal processing pipeline and a streamlined GoogLeNet-based CNN model tuned for fast inference. Furthermore, an Nvidia-based microcontroller communicates directly with the servo motors, allowing low-latency control execution, and an inertial measurement unit (IMU) enhances responsiveness with real-time angular feedback for motion correction. Preliminary evaluations indicate that the system's overall latency remains below 300 milliseconds, a response time deemed suitable for real-time neurorehabilitation applications. A detailed breakdown indicates that data acquisition introduces ~100 ms of delay, preprocessing ~50 ms, CNN inference ~80 ms, and actuation ~50 ms, giving a total response time under 300 ms and supporting the feasibility of real-time operation.
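Summing these stage estimates confirms the stated budget:

\[ 100 + 50 + 80 + 50 = 280 \ \mathrm{ms} < 300 \ \mathrm{ms} \]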

To evaluate computational feasibility, we compared the training and inference times of the proposed GoogLeNet model with baseline classifiers. Training time per fold averaged 210 s for GoogLeNet, 85 s for EEGNet, and 30 s for SVM, while inference per sample was 8.5 ms, 7.2 ms, and 12.4 ms, respectively. These results confirm that GoogLeNet provides a balance between high classification accuracy and low-latency performance, making it suitable for real-time rehabilitation applications [27,28].

IMU integration for sensor fusion in EEG-controlled robotic rehabilitation systems has been extensively researched [21]. The integration of wearable EEG devices and IMU feedback enhances motion accuracy and flexibility [27]. The combination of 3D-printed components with embedded actuators and sensors has yielded a complete working rehabilitation device that addresses portability, affordability, and usability requirements.

Table 3 presents a comparison of recent EEG-based BCI research in terms of classification accuracy and signal processing approaches. The accuracy percentages are between 87.5% and 99.17%, indicating advancements in EEG classification algorithms over recent years.

Table 3 Comparison of EEG-based BCI systems and performance evaluation of this study.

Among the reviewed studies, Dandamudi achieved the highest accuracy at 99.17% with a hybrid neural network that included NLP, LSTM, and RBM [34]. In contrast, Yıldırım et al. used a Fast Fourier Transform (FFT)-based method, which yielded an accuracy of 87.5% [33]. This result is lower than that of other deep learning-based approaches.

This study (2025) reached an accuracy of 90.24% with a CNN-based model. The results highlight the effectiveness of the proposed CNN architecture in classifying EEG signals and its potential for reliable BCI applications. The findings suggest that deep learning-based methods, especially CNNs, can deliver strong performance while maintaining computational efficiency.

4. Discussion

It should be noted that this study enrolled only ten healthy subjects, so its results are preliminary. The findings support the feasibility of the proposed implementation but do not permit clinical extrapolation. Additional studies with neurologically impaired participants are required to properly assess therapeutic efficacy.

The mechanical design successfully addresses the primary needs of providing portability, ease of use, and safety in the structure, thereby making the rehabilitation robot suitable for use in home environments. The application of 3D printing technology not only helps in reducing costs but also improves the effectiveness of rapid prototype creation, which is a significant benefit in the production of customized medical devices.

Mechanical analysis corroborated the structural integrity with a safety factor well above the minimum requirement, implying that the device can withstand the predicted operational loads without failure. Additionally, the use of MATLAB and Simulink for simulation, together with C++ for programming the microcontroller, enabled proper system-level control and smooth wrist motion.

The GoogLeNet-based CNN model recorded consistent classification accuracy across participants, with an overall mean of 90.24%. This finding supports the model's reliability in classifying wrist movement intentions. The model's performance is comparable to that of earlier studies while adding the benefit of real-time efficiency. Although Dandamudi [34] reported the highest accuracy at 99.17%, the simpler CNN configuration used in this study provides an acceptable trade-off between accuracy and real-time operation.

Signal processing and motor response latency are the biggest hurdles to effective BCI-based rehabilitation. To address this issue, the signal processing pipeline in this study was optimized, and an Nvidia-based microcontroller capable of real-time actuation was used. IMU integration further improved control accuracy through real-time angular feedback, reducing the risk of erroneous movements.

The performance comparisons in Table 3 indicate the increasing efficacy of deep learning for EEG-based BCI systems. Although approaches such as LSTM and RBM are promising for increasingly complicated tasks, CNN-based architectures are still appealing owing to their trade-off between simplicity, performance, and speed. The findings confirm the adequacy of the design and operation of the suggested system, and it can be considered a model candidate for real-time BCI-driven rehabilitation systems.

5. Conclusions and Future Work

This article presents the design and testing of an EEG-controlled wrist rehabilitation robot intended to provide an affordable and portable neurorehabilitation system. The combination of EEG-based motor imagery classification, deep learning algorithms, and real-time robotic control demonstrates the viability of brain-controlled rehabilitation. Preliminary testing shows a classification accuracy of 90.24%, confirming the system's suitability for practical rehabilitation use. The system can evolve into a scalable, intelligent, and clinician-friendly platform for neurological rehabilitation, helping make brain-controlled rehabilitation technology more mainstream and effective.

Future research will build upon this feasibility study with clearly defined research priorities:

  1. Expanded Participant Cohorts: Future studies will include participants with various neurological disorders to evaluate the clinical applicability and therapeutic effectiveness of the system.
  2. Systematic Ablation Studies: Comparative experiments will be conducted to quantify the contribution of individual system components, including deep learning classifiers, IMU feedback integration, and alternative preprocessing pipelines.
  3. Multimodal Signal Integration: Combining EEG with additional biosignals, such as electromyography (EMG) and functional near-infrared spectroscopy (fNIRS), will be explored to improve robustness and accuracy in motor intention decoding.
  4. Game-based and Immersive Training: Training paradigms will be enhanced through adaptive difficulty levels and immersive virtual reality environments to improve user engagement and motivation.
  5. Extended Home-Based Trials: Long-term usability studies will assess adherence, real-world functionality, and system performance under minimal clinical supervision.

Acknowledgments

The author would like to express sincere appreciation to Abdullah Yiğit Sağlam for his valuable contributions to the signal recording phase and experiment design using Unity. This study was supported by the Scientific and Technological Research Council of Turkey (TÜBİTAK) under Project No. 123E456.

Author Contributions

The sole author was responsible for the conception, design, data collection, analysis, interpretation, manuscript writing, and final approval of the version to be published.

Funding

The author declares that no financial support was received for the conduct of the research and/or the preparation of this article.

Competing Interests

The author has declared that no competing interests exist.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request. Due to ongoing related studies, the raw data will be made publicly available after these studies are completed.

AI-Assisted Technologies Statement

During the preparation of this manuscript, the author used OpenAI ChatGPT (version GPT-4.0) for language refinement and assistance in improving the clarity. The author has reviewed and edited the output and takes full responsibility for the content of this publication.

References

  1. Yamamoto I, Matsui M, Inagawa N, Hachisuka K, Wada F, Hachisuka A, et al. Development of wrist rehabilitation robot and interface system. Technol Health Care. 2015; 24: S27-S32. [CrossRef] [Google scholar] [PubMed]
  2. Bogue R. Rehabilitation robots. Ind Robot. 2018; 45: 301-306. [CrossRef] [Google scholar]
  3. Raza H, Cecotti H, Prasad G. Deep Learning-based Prediction of EEG Motor Imagery for Neuro-Rehabilitation. Proceedings of the International Joint Conference on Neural Networks (IJCNN); 2020 Jul 19-24; Glasgow, UK. Piscataway, NJ: IEEE. [CrossRef] [Google scholar]
  4. Baniqued PD, Stanyer EC, Awais M, Alazmani A, Jackson AE, Mon-Williams MA, et al. Brain–computer interface robotics for hand rehabilitation after stroke: A systematic review. J NeuroEng Rehabil. 2021; 18: 15. [CrossRef] [Google scholar] [PubMed]
  5. Li M, Liang Z, He B, Zhao CG, Yao W, Xu G, et al. Attention-controlled assistive wrist rehabilitation using a low-cost EEG sensor. IEEE Sens J. 2019; 19: 6497-6507. [CrossRef] [Google scholar]
  6. Wolpaw JR, Birbaumer N, Heetderks WJ, McFarland DJ, Peckham PH, Schalk G, et al. Brain-computer interface technology: A review of the first international meeting. IEEE Trans Rehabil Eng. 2000; 8: 164-173. [CrossRef] [Google scholar] [PubMed]
  7. Craik A, He Y, Contreras-Vidal JL. Deep learning for electroencephalogram (EEG) classification tasks: A review. J Neural Eng. 2019; 16: 031001. [CrossRef] [Google scholar] [PubMed]
  8. Sayılgan E. Classification of hand movements from EEG signals recorded from individuals with spinal cord injury using independent component analysis and machine learning (in Turkish). Karadeniz Fen Bilimleri Dergisi. 2024; 14: 1225-1244. [CrossRef] [Google scholar]
  9. Iandolo R, Marini F, Semprini M, Laffranchi M, Mugnosso M, Cherif A, et al. Perspectives and challenges in robotic neurorehabilitation. Appl Sci. 2019; 9: 3183. [CrossRef] [Google scholar]
  10. McFarland DJ. Brain-computer interfaces for communication and control. Clin Neurophysiol. 2002; 113: 767-791. [CrossRef] [Google scholar] [PubMed]
  11. Tonin A, Semprini M, Kiper P, Mantini D. Brain-computer interfaces for stroke motor rehabilitation. Bioengineering. 2025; 12: 820. [CrossRef] [Google scholar] [PubMed]
  12. Galbert A, Buis A. Active, actuated, and assistive: A scoping review of exoskeletons for the hands and wrists. Can Prosthet Orthot J. 2024; 7: 43827. [CrossRef] [Google scholar] [PubMed]
  13. Mukherjee P, Roy AH. EEG sensor driven assistive device for elbow and finger rehabilitation using deep learning. Expert Syst Appl. 2024; 244: 122954. [CrossRef] [Google scholar]
  14. Ding Y, Udompanyawit C, Zhang Y, He B. EEG-based brain-computer interface enables real-time robotic hand control at individual finger level. Nat Commun. 2025; 16: 5401. [CrossRef] [Google scholar] [PubMed]
  15. Yang S, Li M, Wang J, Shi Z, He B, Xie J, et al. A low-cost and portable wrist exoskeleton using EEG-sEMG combined strategy for prolonged active rehabilitation. Front Neurorobot. 2023; 17: 1161187. [CrossRef] [Google scholar] [PubMed]
  16. Lantada AD, Morgado PL. Rapid prototyping for biomedical engineering: Current capabilities and challenges. Annu Rev Biomed Eng. 2012; 14: 73-96. [CrossRef] [Google scholar] [PubMed]
  17. Bouteraa Y, Ben Abdallah I, Alnowaiser K, Islam MR, Ibrahim A, Gebali F. Design and development of a smart IoT-based robotic solution for wrist rehabilitation. Micromachines. 2022; 13: 973. [CrossRef] [Google scholar] [PubMed]
  18. Mamo HB, Adamiak M, Kunwar A. 3D printed biomedical devices and their applications: A review on state-of-the-art technologies, existing challenges, and future perspectives. J Mech Behav Biomed Mater. 2023; 143: 105930. [CrossRef] [Google scholar] [PubMed]
  19. Mayetin U, Kucuk S. Design and experimental evaluation of a low cost, portable, 3-dof wrist rehabilitation robot with high physical human–robot interaction. J Intell Robot Syst. 2022; 106: 65. [CrossRef] [Google scholar]
  20. Sayilgan E. Design of A Low-Cost Wrist Rehabilitation Robot for Home Use. Proceedings of the 2024 Medical Technologies Congress (TIPTEKNO); 2024 October 10-12; Mugla, Turkiye. Piscataway, NJ: IEEE. doi: 10.1109/TIPTEKNO63488.2024.10755320. [CrossRef] [Google scholar]
  21. Schabron B, Reust A, Desai J, Yihun Y. Integration of forearm sEMG signals with IMU sensors for trajectory planning and control of assistive robotic arm. Annu Int Conf IEEE Eng Med Biol Soc. 2019; 2019: 5274-5277. [CrossRef] [Google scholar] [PubMed]
  22. Avci MB, Sayilgan E. Effective SSVEP frequency pair selection over the GoogLeNet deep convolutional neural network. Proceedings of the 2022 Medical Technologies Congress (TIPTEKNO); 2022 October 31-November 02; Antalya, Turkey. Piscataway, NJ: IEEE. doi: 10.1109/TIPTEKNO56568.2022.9960170. [CrossRef] [Google scholar]
  23. Saadoon YA, Khalil M, Battikh D. Machine and deep learning-based seizure prediction: A scoping review on the use of temporal and spectral features. Appl Sci. 2025; 15: 6279. [CrossRef] [Google scholar]
  24. Iturrate I, Chavarriaga R, Millán J, del R. General principles of machine learning for brain-computer interfacing. Handb Clin Neurol. 2020; 168: 311-328. [CrossRef] [Google scholar] [PubMed]
  25. Sengupta N, Rao AS, Yan B, Palaniswami M. A survey of wearable sensors and machine learning algorithms for automated stroke rehabilitation. IEEE Access. 2024; 12: 36026-36054. [CrossRef] [Google scholar]
  26. Heng W, Solomon S, Gao W. Flexible electronics and devices as human-machine interfaces for medical robotics. Adv Mater. 2022; 34: e2107902. [CrossRef] [Google scholar] [PubMed]
  27. Mukherjee P, Roy AH. A deep learning-based comprehensive robotic system for lower limb rehabilitation. Biomed Signal Process Control. 2025; 100: 107178. [CrossRef] [Google scholar]
  28. De S, Mukherjee P, Roy AH. GLEAM: A multimodal deep learning framework for chronic lower back pain detection using EEG and sEMG signals. Comput Biol Med. 2025; 189: 109928. [CrossRef] [Google scholar] [PubMed]
  29. Sayilgan E. Classifying EEG data from spinal cord injured patients using manifold learning methods for brain-computer interface-based rehabilitation. Neural Comput Appl. 2025; 37: 13573-13596. [CrossRef] [Google scholar]
  30. Ofner P, Schwarz A, Pereira J, Wyss D, Wildburger R, Müller-Putz GR. Attempted arm and hand movements can be decoded from low-frequency EEG from persons with spinal cord injury. Sci Rep. 2019; 9: 7134. [CrossRef] [Google scholar] [PubMed]
  31. Srimadumathi V, Reddy MR. Classification of Motor Imagery EEG signals using high resolution time-frequency representations and convolutional neural network. Biomed Phys Eng Express. 2024; 10: 035025. [CrossRef] [Google scholar] [PubMed]
  32. GitHub. EEGNet for Motor Imagery Classification [Internet]. GitHub; 2021 [cited 2025 August 14]. Available from: https://github.com/amrzhd/EEGNet.
  33. Yıldırım E, Aydın F, Başer O, Aydemir Ö. Using steady state visually evoked potential-based brain computer interface in multiple choice questions. J Investig Eng Technol. 2024; 6: 61-69. [Google scholar]
  34. Dandamudi EG. NeuroAssist: Enhancing cognitive-computer synergy with adaptive AI and advanced neural decoding for efficient EEG signal classification. arXiv. 2024. doi: 10.48550/arXiv.2406.01600. [Google scholar]