This repository contains the official implementation of ConvoReleNet, a hybrid deep neural network combining Convolutional, Transformer, and Recurrent (BiLSTM) modules for Motor Imagery EEG (MI-EEG) classification.
The code accompanies the paper:
ConvoReleNet: A Multi-Branch CNN–Transformer–LSTM Hybrid Network with Transfer Learning for Motor Imagery EEG Decoding
(Frontiers in Neuroscience, 2025)
ConvoReleNet is designed to capture spatial, temporal, and contextual dependencies in EEG signals.
It integrates three complementary modules (a minimal sketch follows this list):
- Convolutional layers → learn localized spectral–spatial features.
- Transformer encoder → models long-range temporal dependencies via self-attention.
- BiLSTM → refines sequential patterns and stabilizes temporal learning.
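For orientation, here is a minimal PyTorch sketch of how such a CNN → Transformer → BiLSTM pipeline can be wired together. The layer sizes, kernel shapes, and pooling factors below are illustrative assumptions, not the published configuration; see ConvoReleNet_MIEEG.py for the actual model.

```python
import torch
import torch.nn as nn

class ConvoReleNetSketch(nn.Module):
    """Minimal sketch of a CNN + Transformer + BiLSTM hybrid for MI-EEG.
    Layer sizes are illustrative guesses, not the published configuration."""

    def __init__(self, n_channels=22, n_classes=4, d_model=16):
        super().__init__()
        # Convolutional branch: temporal filtering, then spatial filtering across electrodes
        self.conv = nn.Sequential(
            nn.Conv2d(1, d_model, kernel_size=(1, 25), padding=(0, 12)),
            nn.BatchNorm2d(d_model),
            nn.ELU(),
            nn.Conv2d(d_model, d_model, kernel_size=(n_channels, 1)),
            nn.BatchNorm2d(d_model),
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 8)),
        )
        # Transformer encoder: long-range temporal dependencies via self-attention
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, dim_feedforward=64, batch_first=True
        )
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=2)
        # BiLSTM: sequential refinement of the attended features
        self.bilstm = nn.LSTM(d_model, 32, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * 32, n_classes)

    def forward(self, x):                      # x: (batch, 1, channels, time)
        x = self.conv(x)                       # (batch, d_model, 1, time')
        x = x.squeeze(2).permute(0, 2, 1)      # (batch, time', d_model)
        x = self.transformer(x)                # (batch, time', d_model)
        x, _ = self.bilstm(x)                  # (batch, time', 64)
        return self.classifier(x.mean(dim=1))  # average over time, then classify

model = ConvoReleNetSketch()
logits = model(torch.randn(8, 1, 22, 1000))    # 8 trials, 22 channels, 1000 samples
print(logits.shape)                            # torch.Size([8, 4])
```

The convolutional front end compresses the raw signal into a short feature sequence, which the Transformer and BiLSTM then treat as tokens over time.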
The network supports:
- Training from scratch or transfer learning between datasets (see the fine-tuning sketch after this list).
- Subject-wise cross-validation (LOSO-like) evaluation.
- Multiple activation variants (ELU, ReLU, Tanh).
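For the transfer-learning mode, one common recipe is to load weights trained on the source dataset, freeze the convolutional front end, and fine-tune the remaining layers on the target dataset. The snippet below illustrates that idea, reusing the ConvoReleNetSketch class from the sketch above; the checkpoint path and freezing policy are assumptions, and the actual script drives this through its --pretrained/--finetune flags. When source and target datasets differ in channel or class count, the mismatched layers are typically re-initialized.

```python
import torch

# Stand-in checkpoint: in practice this comes from training on the source dataset
# (e.g., --dataset IV2b --mode scratch); the file name here is purely illustrative.
pretrained = ConvoReleNetSketch(n_channels=22, n_classes=4)
torch.save(pretrained.state_dict(), "pretrained_demo.pt")

# Target model: load the pretrained weights, freeze the conv branch, fine-tune the rest
model = ConvoReleNetSketch(n_channels=22, n_classes=4)
model.load_state_dict(torch.load("pretrained_demo.pt", map_location="cpu"))

for p in model.conv.parameters():          # keep the learned spectral-spatial filters fixed
    p.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4   # smaller LR for fine-tuning
)
print(sum(p.numel() for p in model.parameters() if p.requires_grad), "trainable parameters")
```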
Tested environment:
| Library | Version |
|---|---|
| Python | 3.8 |
| PyTorch | 1.10 |
| NumPy | ≥1.21 |
| SciPy | ≥1.7 |
| scikit-learn | ≥1.0 |
| MNE | ≥0.24 |
| matplotlib | ≥3.4 |
pip install torch==1.10 numpy scipy scikit-learn mne matplotlib

You can also use a virtual environment:
python3 -m venv convoenv
source convoenv/bin/activate # Linux / macOS
convoenv\Scripts\activate # Windows
pip install torch==1.10 numpy scipy scikit-learn mne matplotlib
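A quick way to confirm the environment is set up, and whether a GPU is visible to PyTorch:

```python
import torch, mne, sklearn

print("PyTorch       :", torch.__version__)
print("MNE           :", mne.__version__)
print("scikit-learn  :", sklearn.__version__)
print("CUDA available:", torch.cuda.is_available())
```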
This project uses publicly available Motor Imagery EEG datasets.
Download any or all of the following:
| Dataset | Description | Download Link |
|---|---|---|
| BCI Competition IV – 2a | 4-class MI EEG (22 channels) | Download |
| BCI Competition IV – 2b | 2-class MI EEG (3 channels) | Download |
| Weibo 2014 Dataset | MI EEG benchmark | Download |
| PhysioNet EEG Motor Movement/Imagery | Open EEG dataset | Download |
After downloading, organize your folder as follows:
/datasets/
├── BNCI_IV_2a/
├── BNCI_IV_2b/
├── Weibo2014/
└── PhysioNet_MMI/
The code automatically detects datasets from this directory.
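As a quick sanity check of a download, you can open a single recording with MNE. The example below assumes the BCI Competition IV-2a files keep their original names (e.g., A01T.gdf for subject 1, training session); adjust the path to your layout.

```python
import mne

# Read one GDF recording from the IV-2a download and inspect it
raw = mne.io.read_raw_gdf("datasets/BNCI_IV_2a/A01T.gdf", preload=True)
print(raw.info)                                    # channels, sampling rate (250 Hz for IV-2a)

events, event_id = mne.events_from_annotations(raw)
print(event_id)                                    # annotation codes, incl. the four MI cues
```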
Train from scratch:

python ConvoReleNet_MIEEG.py --dataset IV2a --mode scratch

Transfer learning (pretrain on IV-2b, fine-tune on IV-2a):

python ConvoReleNet_MIEEG.py --pretrained IV2b --finetune IV2a --mode transfer

Subject-wise cross-validation:

python ConvoReleNet_MIEEG.py --dataset IV2a --cv subjectwise

Add --device cpu if you do not have a GPU.
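Subject-wise evaluation follows a leave-one-subject-out pattern: each fold trains on all subjects but one and tests on the held-out subject. The sketch below shows that split logic with scikit-learn on placeholder arrays; it is not the script's internal implementation.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

# Placeholder data standing in for epoched EEG: trials x channels x samples
X = np.random.randn(90, 22, 1000)
y = np.random.randint(0, 4, size=90)          # 4 MI classes
subjects = np.repeat(np.arange(9), 10)        # 9 subjects, 10 trials each

logo = LeaveOneGroupOut()
for fold, (train_idx, test_idx) in enumerate(logo.split(X, y, groups=subjects)):
    print(f"fold {fold}: train on {len(train_idx)} trials, "
          f"test on subject {subjects[test_idx][0]}")
```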
After training, the script generates:
- Accuracy, F1-score, and Cohen’s κ (per subject)
- Confusion matrices
- Training and validation curves
- CSV summary files and plots in the results/ directory
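The per-subject scores listed above can be reproduced from saved predictions with scikit-learn; the labels below are placeholders standing in for one subject's test set.

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix, f1_score

y_true = [0, 1, 2, 3, 0, 1, 2, 3]     # placeholder ground-truth labels (4 MI classes)
y_pred = [0, 1, 2, 2, 0, 1, 3, 3]     # placeholder model predictions

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Macro F1 :", f1_score(y_true, y_pred, average="macro"))
print("Cohen's κ:", cohen_kappa_score(y_true, y_pred))
print("Confusion matrix:\n", confusion_matrix(y_true, y_pred))
```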
- You do not need prior EEG experience; just follow the steps above in order.
- The model and preprocessing pipelines are fully implemented in PyTorch.
- Training may take ~15–30 min on GPU, or longer on CPU.
- You can change hyperparameters in the script directly (e.g., number of filters, heads, layers).
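As an illustration of the kind of block to look for, the variables below are hypothetical; the actual names and defaults live inside ConvoReleNet_MIEEG.py.

```python
# Hypothetical hyperparameter block; check ConvoReleNet_MIEEG.py for the real names.
N_FILTERS = 16            # convolutional feature maps
N_HEADS = 4               # self-attention heads (must divide the model dimension)
N_TRANSFORMER_LAYERS = 2  # stacked encoder layers
LSTM_HIDDEN = 32          # BiLSTM hidden units per direction
BATCH_SIZE = 64
LEARNING_RATE = 1e-3
EPOCHS = 100
```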
If you use this code or any work derived from it, you must cite the associated paper:
@article{ConvoReleNet2025,
title={ConvoReleNet: A Multi-Branch CNN–Transformer–LSTM Hybrid Network with Transfer Learning for Motor Imagery EEG Decoding},
author={Your Name and Co-authors},
journal={Frontiers in Neuroscience},
year={2025}
}
This work is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0).
You are free to use, share, and modify the code as long as you give appropriate credit and cite the original publication.
Example acknowledgment:
“This research used the ConvoReleNet framework (Kyzyrkanov et al., 2025) available under CC BY 4.0.”
Author: Your Name
Email: your.email@example.com
Repository: https://github.com/YourUsername/ConvoReleNet-MI-EEG
For questions or collaboration, feel free to open an issue or reach out via email.
- Python 3.8 / Torch 1.10
- Download EEG datasets (links above)
- Run ConvoReleNet_MIEEG.py to reproduce results
- Cite the paper if you use the code
Enjoy experimenting with ConvoReleNet!