2025 NASA Space Apps Challenge
A World Away: Hunting for Exoplanets with AI
Group: vibe coding only
- Nick
- Carl
- Mahad
- Lynn
- Tahmid
- Javier
This repository contains a small Flask backend (model inference) and a React + TypeScript frontend. Below are concise, copy-paste PowerShell instructions to get both running on Windows. These steps assume you have Python 3.10+ and Node.js installed.
- Open PowerShell and change to the backend folder:
```powershell
cd .\backend
```
- Create a virtual environment and activate it (recommended):
```powershell
python -m venv .venv
. .venv\Scripts\Activate.ps1
```
- Install the Python dependencies. If a requirements file isn't present, install the essentials used by the project:
```powershell
pip install --upgrade pip
pip install flask flask-cors flask-socketio tensorflow numpy
```
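If it helps to pin these in one place, the same essentials can be written out as a `backend/requirements.txt` (a minimal sketch only — the repository may not ship this file, and versions are deliberately left unpinned):

```
flask
flask-cors
flask-socketio
tensorflow
numpy
```

With that file in place, `pip install -r requirements.txt` replaces the manual install line above.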
- Make sure the model file `exoplanet_lightcurve_cnn.h5` exists in the `backend/` folder. The `/predict` endpoint expects JSON with the fields `time` (CSV string), `flux` (CSV string), `period`, and `t0`.
- Run the backend server (development):
```powershell
# from the backend folder
py app.py
```
- Open a new PowerShell window and change to the frontend folder:
```powershell
cd ..\frontend
```
- Install the Node dependencies:
```powershell
npm install
```
- Start the dev server:
```powershell
npm start
```

By default the dev server runs on http://localhost:3000.
- Useful test inputs live in `backend/testcases/` (JSON files). Each file contains `time` and `flux` (both CSV strings) plus `period` and `t0` fields. You can POST these files directly to the backend for a quick check.
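For reference, the payload shape these testcase files follow can be sketched in Python (only the field names come from the project; the numeric values below are invented for illustration):

```python
import json

# Build a /predict request body: time and flux are CSV strings,
# period and t0 are numbers (all values here are illustrative only).
time_values = [0.0, 0.02, 0.04, 0.06]
flux_values = [1.000, 0.998, 0.997, 1.001]

payload = {
    "time": ",".join(str(t) for t in time_values),
    "flux": ",".join(str(f) for f in flux_values),
    "period": 1.5,
    "t0": 0.03,
}

body = json.dumps(payload)
print(body)
```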
Example PowerShell POST (send a saved testcase to the backend):
```powershell
$body = Get-Content .\backend\testcases\kepler227_synth.json -Raw
Invoke-RestMethod -Uri http://localhost:5000/predict -Method Post -Body $body -ContentType 'application/json'
```

If you get a CORS or network error when the frontend tries to call the backend, verify that both servers are running and check the browser console (DevTools) for the actual error.
- The backend's preprocessing mirrors the training code: flux is median-centered and scaled by its standard deviation, the time axis is phase-folded using `period` and `t0`, and the result is interpolated to the model input size (501 samples). If you modify the preprocessing in `backend/app.py`, re-check parity with `backend/ai_model_training/data_processing.py` (the training code).
- If the model returns unexpectedly confident outputs on tiny inputs, provide longer, higher-sample testcases (the `backend/testcases/` folder contains some exported Kepler examples and a synthesized confirmed-like transit, `kepler227_synth.json`).
- To enable detailed debugging in the backend (raw logits, preprocessed array), update `/predict` to return additional fields guarded by a query parameter (e.g. `?debug=1`).
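The preprocessing chain described above can be sketched roughly as follows. This is an illustrative reconstruction, not the actual code in `backend/app.py` — only the steps (median-center, std-scale, phase-fold, interpolate to 501 samples) come from the text; the function and variable names are made up:

```python
import numpy as np

N_INPUT = 501  # model input size mentioned above


def preprocess(time, flux, period, t0):
    """Sketch of the described chain: median-center and std-scale the
    flux, phase-fold the time axis, then resample onto a fixed grid."""
    time = np.asarray(time, dtype=float)
    flux = np.asarray(flux, dtype=float)

    # 1. Median-center the flux and scale by its standard deviation.
    flux = (flux - np.median(flux)) / np.std(flux)

    # 2. Phase-fold: map each timestamp into [-0.5, 0.5) phase units.
    phase = ((time - t0) / period + 0.5) % 1.0 - 0.5

    # 3. Sort by phase and interpolate onto a uniform N_INPUT-point grid.
    order = np.argsort(phase)
    grid = np.linspace(-0.5, 0.5, N_INPUT)
    return np.interp(grid, phase[order], flux[order])
```

When checking parity with `data_processing.py`, comparing the output of a sketch like this against the training pipeline on the same testcase is a quick way to spot drift.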
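One way to wire up the suggested `?debug=1` guard is sketched below. This is a minimal standalone Flask example, not the real `/predict` handler — the prediction value and debug fields are placeholders:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)


@app.route("/predict", methods=["POST"])
def predict():
    # ... parse time/flux/period/t0 and run the model here ...
    response = {"prediction": 0.42}  # placeholder value

    # Return extra diagnostics only when the client explicitly asks.
    if request.args.get("debug") == "1":
        response["raw_logits"] = [0.42]         # placeholder
        response["preprocessed_samples"] = 501  # placeholder

    return jsonify(response)
```

Keeping the diagnostics behind the query parameter means normal frontend calls stay small and the response schema is unchanged unless `?debug=1` is appended.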
If anything is unclear, or you'd like an automated script to start both servers (or a `requirements.txt` / `package.json` scripts), tell me and I can add it.