This repo now keeps only the pieces required to train, evaluate, search for a learning rate, and run W&B sweeps. Everything routes through `launch.sh`.
| Command | What it does | Defaults |
|---|---|---|
| `./launch.sh run` | Train using `scripts/run.py` | config: `configs/exp/att_clp/baseline.yaml`, W&B on |
| `./launch.sh test` | Evaluate a checkpoint via `scripts/test.py` | expects `--ckpt` (`best`, `last`, or a path) |
| `./launch.sh lr` | Two-stage LR + scheduler sweep via `src/opt/parallel_sweep.py` | config: `configs/config.yaml` |
| `./launch.sh sweep` | Create a W&B sweep and launch local agents | requires `-c sweep.yaml` |
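With no flags, each subcommand falls back to the defaults in the table; assuming a bare invocation is accepted, the simplest possible training run is:

```bash
# Train with the default baseline config; W&B logging is on by default.
./launch.sh run
```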
- `-c, --config PATH` — override the config file (sweep mode expects a sweep YAML).
- `-g, --gpu VALUE` — for `run`/`test` supply a count (e.g. `-g 2`); for `lr`/`sweep` supply GPU ids (e.g. `-g 0,1,2,3`).
- `-w, --wandb {0,1}` — toggle W&B logging for `run`/`test`.
- `--save` — enable checkpoint saving during `run`.
- `--ckpt PATH` — checkpoint path (or `best`/`last`) for `test`.
- `--dry-run` — preview the LR sweep without launching jobs.
- `-e/--entity`, `-p/--project`, `--count` — W&B sweep options.
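For example, a training run that overrides the default config, asks for two GPUs, keeps W&B on, and saves checkpoints combines the flags like this (this assumes the master `configs/config.yaml` is usable as a `run` config):

```bash
# Train on 2 GPUs with checkpoint saving and W&B logging enabled.
./launch.sh run -c configs/config.yaml -g 2 -w 1 --save
```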
- `configs/exp/att_clp/baseline.yaml` — fast baseline for sanity checks.
- `configs/exp/logg/linear.yaml` — linear-regression baseline for simultaneous `T_eff`/`log_g`/`M_H`.
- `configs/config.yaml` — consolidated master config with every tunable field documented.
When `model.name: linear_regression` is set, `scripts/run.py` bypasses Lightning entirely and fits `sklearn.linear_model.LinearRegression` on the spectral pixels before generating the same regression-analysis plots. Example:
```bash
export TRAIN_DIR=/path/to/train
export VAL_DIR=/path/to/val
export TEST_DIR=/path/to/test
./launch.sh run -c configs/exp/logg/linear.yaml -w 0
```
- `data.param` lists every label you want (`"T_eff, log_g, M_H"`); the helper infers output dimensionality and plot titles automatically.
- Outputs mirror the ViT runs: `RegressionPlotter` denormalizes with the same stats and either uploads to W&B (when `-w 1`) or saves locally to `viz.save_dir` (`./results/test_plots`).
- No optimizer or checkpoints are involved; the model is solved in closed form via scikit-learn, so the `opt`/`train` sections are ignored apart from the dataset sizes.
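The same baseline with `-w 1` uploads the regression plots to W&B instead of writing them under `viz.save_dir`:

```bash
# Linear baseline again, but with the plots uploaded to W&B.
./launch.sh run -c configs/exp/logg/linear.yaml -w 1
```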
```bash
# Create sweep + launch agents on GPUs 0 and 1
./launch.sh sweep -c configs/sweep.yaml -e my-entity -p my-project -g 0,1
```
The script creates the sweep via the W&B CLI, prints the sweep ID, and spawns one agent per GPU.
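Extra agents can be attached to the printed sweep ID later with the stock W&B CLI; `SWEEP_ID` below is a placeholder, and pinning a GPU via `CUDA_VISIBLE_DEVICES` is a convention of your environment rather than something `launch.sh` does for you:

```bash
# Attach one more agent to an existing sweep, pinned to GPU 2.
CUDA_VISIBLE_DEVICES=2 wandb agent my-entity/my-project/SWEEP_ID
```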
```bash
./launch.sh lr -c configs/config.yaml -g 0,1,2,3
```
This runs a seven-value LR sweep, picks the best LR, then compares schedulers at that LR. Results are written to `opt_runs/sweep/`.
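To preview the planned jobs without occupying the GPUs, add `--dry-run`:

```bash
# Print the LR grid and scheduler-comparison jobs without launching them.
./launch.sh lr -c configs/config.yaml -g 0,1,2,3 --dry-run
```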
```bash
./launch.sh test --ckpt checkpoints/latest.ckpt
```
Runs `scripts/test.py`, which simply loads the config, attaches the datamodule, and calls Lightning's `Trainer.test`.
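`--ckpt` also accepts the `best`/`last` shortcuts, so an explicit path is optional:

```bash
# Evaluate the best checkpoint on one GPU with W&B logging off.
./launch.sh test --ckpt best -g 1 -w 0
```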
```bash
./launch.sh run --save -w 1
```
Trains with the baseline config (or the file provided via `-c`), saving checkpoints and logging to W&B.
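For a quick local sanity check, turn logging off and skip checkpointing (omitting `--save` leaves saving disabled):

```bash
# Fast smoke test: baseline config, no W&B, no checkpoints written.
./launch.sh run -w 0
```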
Everything else (old helper scripts, cached results, etc.) has been removed to keep the repo focused on these four entry points.