This project provides a comprehensive, extensible, and user-friendly framework for creating, testing, and deploying memcapacitor-based neuromorphic computing models. It is designed for both local experimentation and powerful, automated grid searches on remote GPU servers.
- Project Structure
- Getting Started
- Running Experiments
- Extensibility
- Project Philosophy
- Scientific Best Practices
## Project Structure

```
.
├── configs/
│   ├── best_lorenz_config.yaml
│   └── lorenz_random_search.yaml
├── datasets/
│   ├── lorenz/
│   │   ├── __init__.py
│   │   ├── generator.py
│   │   └── reporting.py
│   └── mackey_glass/
│       ├── __init__.py
│       ├── generator.py
│       └── reporting.py
├── experiments/
│   └── grid_search.py
├── models/
│   └── memcapacitor.py
├── networks/
│   ├── reservoir.py
│   └── topologies/
│       ├── small_world.py
│       └── random.py
├── remote/
│   ├── run_remote_experiment.py
│   ├── run_remote_experiment.md
│   ├── setup.py
│   └── pods.yml
├── tests/
│   ├── test_memcapacitor_verbose.py
│   └── ...
├── training/
│   ├── train.py
│   └── outputs/
├── .gitignore
├── README.md
└── requirements.txt
```
- `configs/`: YAML configuration files for experiments, defining hyperparameters and settings for both local runs and remote grid searches (a hypothetical example is sketched after this list).
- `datasets/`: Data loading modules. Each subdirectory is a self-contained dataset (e.g., `lorenz`, `mackey_glass`) responsible for generating or loading its specific time-series data.
- `experiments/`: Home of the main grid-search driver (`grid_search.py`), which parallelizes training runs based on a given configuration file.
- `models/`: Core device models, such as the `Memcapacitor` itself. These define the fundamental building blocks of the reservoir.
- `networks/`: The `MemcapacitiveReservoir` implementation and network topology generators.
  - `topologies/`: Each file (e.g., `small_world.py`) defines a specific network structure generator.
- `remote/`: All scripts for the fully automated remote workflow, including the main orchestrator (`run_remote_experiment.py`), server setup scripts, and configuration files.
- `tests/`: A suite of verbose, behavioral tests that produce human-readable output and diagnostic plots to verify the correctness of each component.
- `training/`: The core training pipeline (`train.py`), which can be run standalone or as part of a grid search.
  - `outputs/`: The default directory where all training artifacts (plots, logs, and saved models) are stored.
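The exact keys in these configuration files are defined by `training/train.py` and `experiments/grid_search.py`; the YAML below is only a hypothetical sketch of what a file like `configs/best_lorenz_config.yaml` might contain, with every key name assumed for illustration:

```yaml
# Hypothetical sketch only -- the real schema is defined by training/train.py
# and experiments/grid_search.py and may differ.
dataset: lorenz          # which datasets/ subdirectory to load
topology: small_world    # which networks/topologies/ generator to use
reservoir_size: 500      # number of memcapacitive nodes (assumed parameter name)
spectral_radius: 0.9     # recurrent weight scaling (assumed parameter name)
random_seed: 42          # deterministic seed, per the reproducibility guidelines
```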
## Getting Started

Follow these steps to get the project up and running on your local machine.

Clone the repository:

```bash
git clone https://github.com/WillForEternity/MemoryCapacitorTopologies.git
cd MemoryCapacitorTopologies
```

Create and activate a virtual environment, then install the dependencies:

```bash
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```

Verify your setup by running the verbose, self-documenting tests. These tests check core functionality and produce visual outputs for inspection:
```bash
python tests/test_memcapacitor_verbose.py
python tests/test_topology_small_world_verbose.py
python tests/test_dataset_mackey_glass_verbose.py
python tests/test_reservoir_forward_verbose.py
```

## Running Experiments

This project includes a full example of training a reservoir computer to predict the Lorenz attractor. The optimal hyperparameters, found via a remote grid search, are stored in `configs/best_lorenz_config.yaml`.
To run this pre-configured example locally:
```bash
python -m training.train --config configs/best_lorenz_config.yaml --plot
```

This command trains the model and saves an interactive 3D plot of the predicted vs. actual Lorenz attractor to `training/outputs/lorenz_predictions_3d.html`.
For comprehensive hyperparameter sweeps, the repository includes a powerful orchestration script that fully automates the process on a remote GPU server (e.g., from RunPod, Vast.ai, etc.).
For a complete guide on the remote workflow, see remote/run_remote_experiment.md.
- Commit Your Changes: Ensure all local code changes are pushed to GitHub.

```bash
git add . && git commit -m "Your changes" && git push
```

- Kill Old Processes (Recommended): Free up resources on the remote server.

```bash
ssh -i ~/.ssh/id_ed25519 -p YOUR_PORT user@YOUR_IP 'pkill -f grid_search.py || true'
```

- Launch the Orchestrator: This script handles server provisioning, setup, execution, and results retrieval.

```bash
python remote/run_remote_experiment.py
```
This script provides a beautifully formatted, real-time log, including a static, single-line progress bar for the grid search.
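The orchestrator reads its connection details from `remote/pods.yml`. Its actual schema is defined by `run_remote_experiment.py`; the snippet below is only a hypothetical sketch (all field names are assumptions) of the kind of information such a file would hold:

```yaml
# Hypothetical sketch -- the real field names are defined by remote/run_remote_experiment.py.
pod:
  host: YOUR_IP                 # public IP of the rented GPU server
  port: YOUR_PORT               # SSH port exposed by the provider
  user: user                    # SSH login user
  ssh_key: ~/.ssh/id_ed25519    # same key as in the manual SSH example above
```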
## Extensibility

Adding your own components is designed to be simple:

- Add a Topology: Create `networks/topologies/your_topology.py`. Inside, define a function that returns a NumPy adjacency matrix and decorate it with `@register("your_name")`.
- Add a Dataset: Create a `datasets/your_dataset/` directory containing a `generator.py` file that exposes a `load()` function.

The framework will automatically discover and register these new components (a sketch of both extension points follows below).
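Here is a minimal, hypothetical sketch of both extension points. The import path of `register`, its exact signature, and the expected return types are assumptions based on the description above; check the existing files under `networks/topologies/` and `datasets/` for the actual conventions.

```python
# networks/topologies/your_topology.py -- hypothetical sketch, not the project's actual API.
import numpy as np

from networks.topologies import register  # assumed import path for the decorator


@register("ring")
def ring_topology(n_nodes: int = 100) -> np.ndarray:
    """Return an adjacency matrix where each node connects to its right neighbor."""
    adjacency = np.zeros((n_nodes, n_nodes))
    for i in range(n_nodes):
        adjacency[i, (i + 1) % n_nodes] = 1.0
    return adjacency
```

```python
# datasets/your_dataset/generator.py -- hypothetical sketch of the load() contract.
import numpy as np


def load(n_samples: int = 10_000, random_seed: int = 0) -> np.ndarray:
    """Return a toy time series; the real signature and return shape may differ."""
    rng = np.random.default_rng(random_seed)
    t = np.linspace(0.0, 100.0, n_samples)
    return np.sin(t) + 0.01 * rng.standard_normal(n_samples)
```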
## Project Philosophy

This project is written so that any human researcher or AI agent can reason about the codebase with minimal context switching.

- Verbose, Natural-Language Tests: Every behavior test prints explanatory PASS/FAIL commentary and saves plots, allowing correctness to be judged directly from the output.
- Self-Documenting Modules: Each folder has a single, clear responsibility (`models/`, `datasets/`, `networks/`).
- Plug-and-Play Extensibility: New topologies and datasets are discovered automatically via decorators, requiring no central registration.
- Clear Naming & Typing: Functions and variables use descriptive names and type hints to make the code easy to understand and analyze.
## Scientific Best Practices

This repository follows these guidelines:

- Deterministic Seeds: All random operations accept a `random_seed` to ensure reproducibility.
- Explicit Device Placement: Tensors are explicitly placed on the correct device (`cuda` or `cpu`); a small sketch of these first two patterns follows this list.
- Separation of Concerns: Data generation, model definition, training, and evaluation are in distinct, modular components.
- Version-Controlled Results: Grid search scripts save the winning network and diagnostic plots, which are small enough to be committed to version control.
- Verbose Logging: Tests and training scripts print detailed, natural-language commentary for easy auditing.
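To illustrate the first two guidelines, here is a minimal, hypothetical sketch of the seeding and device-placement pattern. It assumes the project uses PyTorch (suggested by the `cuda`/`cpu` terminology); the function and variable names are ours, not taken from this codebase.

```python
# Hypothetical sketch of deterministic seeding + explicit device placement.
import numpy as np
import torch


def make_initial_state(n_nodes: int, random_seed: int) -> torch.Tensor:
    """Create an initial reservoir state deterministically and place it explicitly."""
    torch.manual_seed(random_seed)             # seed PyTorch's RNG
    rng = np.random.default_rng(random_seed)   # seed NumPy independently

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    state = torch.from_numpy(rng.standard_normal(n_nodes)).float()
    return state.to(device)                    # explicit placement on cuda or cpu
```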