This is the git repository associated with *Stochastic Engrams for Efficient Continual Learning with Binarized Neural Networks* by Isabelle Aguilar (email: iagu0459@sydney.edu.au), Luis Fernando Herbozo Contreras, and Omid Kavehei.
├── README.md <- This file
├── requirements.txt <- Requirements file for reproducing the analysis environment on Linux
├── src
│ ├── SplitMNIST.py <- Training and evaluation for Split-MNIST experiments
│ ├── CORe50.py <- Training and evaluation for CORe50 experiments
│ ├── PermutedMNIST.py <- Training and evaluation for Permuted-MNIST experiments (in Appendix)
│ │
│ ├── results <- Figure visualizations
│ │ ├── fig3.ipynb
│ │ ├── fig3.pdf
│ │ ├── fig4.ipynb
│ │ └── fig4.pdf
│ │
│ └── utils <- Utility functions
│     ├── datautils.py
│     ├── trainutils.py
│     └── modelutils.py
Execute the script associated with the experiment you want to run (`SplitMNIST.py`, `CORe50.py`, or `PermutedMNIST.py`). Hyperparameters can be changed at the top of each file, next to the `#Hyperparameter` comment. These scripts use wandb to track training logs.
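As a sketch, a typical run (assuming a working Python environment and a configured wandb account; exact Python version and CLI flags may differ) looks like:

```shell
# Install dependencies (Linux)
pip install -r requirements.txt

# Log in to wandb so training metrics are tracked (one-time setup)
wandb login

# Run the Split-MNIST experiment; edit the values next to the
# #Hyperparameter comment at the top of the script to change settings
python src/SplitMNIST.py
```

The other experiments are launched the same way by substituting `src/CORe50.py` or `src/PermutedMNIST.py`.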
If you find our work useful, please cite it as:
@article{aguilar2025stochastic,
title={Stochastic Engrams for Efficient Continual Learning with Binarized Neural Networks},
author={Aguilar, Isabelle and Herbozo Contreras, Luis Fernando and Kavehei, Omid},
journal={arXiv preprint arXiv:2503.21436},
year={2025}
}