engramBNN

This is the repository associated with the paper "Stochastic Engrams for Efficient Continual Learning with Binarized Neural Networks" by Isabelle Aguilar (email: iagu0459@sydney.edu.au), Luis Fernando Herbozo Contreras, and Omid Kavehei.

Project Organization


├── README.md            <- This file
├── requirements.txt     <- Requirements for reproducing the analysis environment (Linux)
├── src
│   ├── SplitMNIST.py    <- Training and evaluation for Split-MNIST experiments
│   ├── CORe50.py        <- Training and evaluation for CORe50 experiments
│   ├── PermutedMNIST.py <- Training and evaluation for Permuted-MNIST experiments (in Appendix)
│   │
│   ├── results          <- Figure visualizations
│   │   ├── fig3.ipynb
│   │   ├── fig3.pdf
│   │   ├── fig4.ipynb
│   │   └── fig4.pdf
│   │
│   └── utils            <- Utility modules
│       ├── datautils.py
│       ├── trainutils.py
│       └── modelutils.py

Usage

Execute the script associated with the experiment you want to run (SplitMNIST.py, CORe50.py, or PermutedMNIST.py). The hyperparameters can be changed at the top of each file, next to the #Hyperparameter comment. These scripts use wandb (Weights & Biases) to log training.
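For example, a typical run might look like the following (a sketch assuming a Linux Python environment and a Weights & Biases account; adjust paths to where you cloned the repository):

```shell
# Install dependencies listed in requirements.txt
pip install -r requirements.txt

# Authenticate with Weights & Biases so training logs are tracked
wandb login

# Run the Split-MNIST experiment; edit the values marked by the
# #Hyperparameter comment at the top of the file to change settings
python src/SplitMNIST.py
```

The same pattern applies to `src/CORe50.py` and `src/PermutedMNIST.py`.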

If you find our work useful, please cite it as:

@article{aguilar2025stochastic,
  title={Stochastic Engrams for Efficient Continual Learning with Binarized Neural Networks},
  author={Aguilar, Isabelle and Herbozo Contreras, Luis Fernando and Kavehei, Omid},
  journal={arXiv preprint arXiv:2503.21436},
  year={2025}
}
