Published in IEEE Transactions on AI [link] | arXiv:2508.09451
Authors: Ziyu Liu (ziyu.liu2@student.rmit.edu.au), Azadeh Alavi (azadeh.alavi@rmit.edu.au), Minyi Li, Xiang Zhang.
CoGenT is a unified self-supervised learning framework for time series that brings together the strengths of both contrastive and generative representation learning. Instead of relying on a single paradigm, CoGenT combines representation alignment with masked reconstruction within one architecture, enabling the model to learn features that are simultaneously discriminative and structure-aware. This unified design makes CoGenT broadly effective across diverse time-series datasets and tasks, while remaining simple, lightweight, and easy to integrate into existing pipelines.
Framework of the proposed CoGenT:
- Unified contrastive–generative framework: Combines representation alignment and masked reconstruction in one architecture to learn time-series features that are both discriminative and structure-aware (a minimal illustrative sketch of this joint objective is shown after this list).
- Consistent improvements across six datasets: CoGenT outperforms standard SimCLR and MAE baselines on all evaluated datasets, which span different channel counts, sampling frequencies, and numbers of classes.
- Strong overall performance: Achieves top F1 scores such as 0.9652 on FD and 0.9131 on FordA, delivering substantial gains over contrastive-only and generative-only baselines.
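To make the unified objective concrete, here is a minimal, illustrative sketch of how a SimCLR-style contrastive loss and an MAE-style masked-reconstruction loss can be combined in a single training step. This is not the repository's actual implementation; `encoder`, `projector`, `decoder`, `mask`, and the weighting `alpha` are placeholder names introduced only for illustration.

import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    # SimCLR-style loss: each view's positive is the other view of the same sample.
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)        # (2B, D)
    sim = z @ z.t() / temperature                             # pairwise similarities
    self_mask = torch.eye(sim.size(0), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))           # exclude self-pairs
    targets = torch.arange(sim.size(0), device=z.device).roll(sim.size(0) // 2)
    return F.cross_entropy(sim, targets)

def masked_reconstruction_loss(x, x_hat, mask):
    # MAE-style loss: penalise reconstruction error only on the masked time steps.
    return ((x_hat - x) ** 2 * mask).sum() / mask.sum().clamp(min=1)

def joint_step(encoder, projector, decoder, x_view1, x_view2, mask, alpha=0.5):
    # Contrastive branch: align the representations of two augmented views.
    h1, h2 = encoder(x_view1), encoder(x_view2)
    loss_con = nt_xent_loss(projector(h1), projector(h2))
    # Generative branch: reconstruct the original series from a masked input.
    x_hat = decoder(encoder(x_view1 * (1 - mask)))
    loss_gen = masked_reconstruction_loss(x_view1, x_hat, mask)
    # Weighted sum of the two objectives (alpha is an illustrative hyperparameter).
    return alpha * loss_con + (1 - alpha) * loss_gen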
# Clone
git clone https://github.com/DL4mHealth/cogent.git
cd cogent
# Create a Python environment
python -m venv .venv
source .venv/bin/activate # mac/linux
# .venv\Scripts\activate # windows
pip install -r requirements.txt
requirements.txt should include:
einops==0.8.1
numpy==1.24.3
pandas==2.0.3
PyYAML==6.0.3
scikit_learn==1.3.0
scipy==1.10.1
sktime==0.29.1
timm==0.6.12
torch==2.4.1
torchmetrics==1.4.0.post0
tqdm==4.66.5
ucimlrepo==0.0.7
This example shows how to run both self-supervised pretraining and supervised finetuning on the FordA dataset.
The UCR datasets are loaded automatically via:
from sktime.datasets import load_UCR_UEA_dataset
No manual downloads are required.
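For example, the FordA splits can be fetched directly with that function (an illustrative snippet; the repository's own data loader may wrap this call differently):

from sktime.datasets import load_UCR_UEA_dataset

# FordA is downloaded and cached automatically on first use.
X_train, y_train = load_UCR_UEA_dataset(name="FordA", split="train", return_X_y=True)
X_test, y_test = load_UCR_UEA_dataset(name="FordA", split="test", return_X_y=True)
print(X_train.shape, y_train.shape)  # nested DataFrame of (n_instances, n_channels) plus labels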
Open:
config/UCR_config.yaml
Uncomment the FordA section and comment out the others:
# FordA
dataset_: "FordA"
n_class: 2
...
## other datasets...
#dataset_: "ChlorineConcentration"
#n_class: 3
#...
Set pretrain: True in the same config file:
pretrain: True
# pretrain: False
Run:
python CoGenT_pretrain.py --config config/UCR_config.yaml
This performs self-supervised pretraining on FordA.
To finetune using the pretrained model, keep the same flag:
pretrain: True
Run:
python CoGenT_finetune.py --config config/UCR_config.yaml
This performs supervised finetuning using the pretrained checkpoint.
If you want supervised finetuning from scratch, switch the flag:
pretrain: False
And run:
python CoGenT_finetune.py --config config/UCR_config.yaml
This runs finetuning without loading any pretrained weights.
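If you want to inspect or script these settings, the YAML config can be read with PyYAML. The snippet below is only a minimal illustration of how the dataset_, n_class, and pretrain keys could be consumed; it is not the repository's actual argument handling:

import argparse
import yaml  # PyYAML is already pinned in requirements.txt

parser = argparse.ArgumentParser()
parser.add_argument("--config", type=str, default="config/UCR_config.yaml")
args = parser.parse_args()

with open(args.config) as f:
    cfg = yaml.safe_load(f)

# pretrain: True  -> CoGenT_pretrain.py runs self-supervised pretraining, and
#                    CoGenT_finetune.py loads the resulting checkpoint.
# pretrain: False -> CoGenT_finetune.py trains from scratch.
print(cfg["dataset_"], cfg["n_class"], cfg["pretrain"])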
If you find this work useful for your research, please consider citing this paper:
@article{liu2025unified,
  title={A Unified Contrastive-Generative Framework for Time Series Classification},
  author={Liu, Ziyu and Alavi, Azadeh and Li, Minyi and Zhang, Xiang},
  journal={arXiv preprint arXiv:2508.09451},
  year={2025}
}
This repository is released under the Apache-2.0 license. Please see the LICENSE file for details.
For questions regarding the code, please contact the author Ziyu Liu (ziyu.liu2@student.rmit.edu.au).
As stated in the paper, the full code is released in this repository for reproducibility.