A full-stack business impact analysis tool with a React/Vite frontend, an Express backend, and a FastAPI ML service. The frontend calls the backend `/predict` API, which delegates to the ML service.
- Frontend: React 18 + Vite in `frontend/` (source in `src/`)
- Backend: Express (port 8000) in `backend/`
- ML service: FastAPI (port 9000) in `ml/prediction_service/`
Run the full stack with Docker Compose:

```
docker-compose up --build
```

Services and ports:
- Frontend: http://localhost:5173
- Backend API: http://localhost:8000
- ML service: http://localhost:9000
Volumes keep source code mounted for hot reload:
- Frontend mounts the repo to `/app`, so `/src` is available to Vite
- Backend mounts `backend/public` and `backend/datasets`
Run each service locally without Docker:

- ML service:

```
cd ml/prediction_service
python -m venv .venv && .venv/Scripts/activate  # Windows
# macOS/Linux: source .venv/bin/activate
pip install -r requirements.txt
uvicorn app:app --host 0.0.0.0 --port 9000
```
- Backend:

```
cd backend
npm install
set ML_SERVICE_URL=http://localhost:9000
npm start
```

(`set` is Windows syntax; use `export ML_SERVICE_URL=...` on macOS/Linux.)
- Frontend:

```
cd frontend
npm install
set VITE_API_URL=http://localhost:8000
npm run dev -- --host --port 5173
```

Backend API endpoints:

- `POST /predict` – Body: `{ businessType, scale, locationKey, locationLabel?, contextSignals?, query? }`. Returns the prediction payload plus an AI explanation.
- `GET /predict/locations` – available location profiles
- `POST /simulate` – simulation endpoint
- `GET /health` – service health check
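A minimal sketch of assembling a `POST /predict` request body. Only the field names come from the endpoint description above; the values and the commented-out `fetch` call are illustrative assumptions.

```javascript
// Sketch: building a /predict request body.
// Field *names* match the documented body; the *values* are hypothetical.
const body = {
  businessType: "cafe",        // hypothetical value
  scale: "small",              // hypothetical value
  locationKey: "downtown-01",  // hypothetical value
  locationLabel: "Downtown",   // optional field
};

// How it would be sent (commented out so the sketch runs without a live backend):
// const res = await fetch("http://localhost:8000/predict", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(body),
// });

console.log(JSON.stringify(body));
```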
Environment variables:

- Frontend: `VITE_API_URL` (defaults to `http://localhost:8000`)
- Backend: `ML_SERVICE_URL` (defaults to `http://localhost:9000`)
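Both defaults can be applied with the usual fallback pattern. This is a sketch, not the project's actual code — and note that Vite exposes its variable in the frontend as `import.meta.env.VITE_API_URL`, not `process.env`:

```javascript
// Sketch: resolving the documented variables with their documented defaults.
const ML_SERVICE_URL = process.env.ML_SERVICE_URL || "http://localhost:9000";
const VITE_API_URL = process.env.VITE_API_URL || "http://localhost:8000";

console.log({ ML_SERVICE_URL, VITE_API_URL });
```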
Prediction flow:

- User adds a business and places it on the map
- Frontend sends a request to the backend `/predict` endpoint
- Backend calls the ML service's `/predict` and generates an AI explanation
- Frontend merges the ML results with local analytics for display and exports
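The last step — merging ML results with local analytics — could be as simple as an object merge. A sketch under assumed field names (`prediction`, `explanation`, `footTraffic` are hypothetical; the README only says the two sources are merged for display and exports):

```javascript
// Sketch: merge locally computed analytics with the ML service response.
// On conflicting keys, the ML response wins.
function mergeResults(mlResponse, localAnalytics) {
  return {
    ...localAnalytics, // metrics computed in the frontend (hypothetical)
    ...mlResponse,     // prediction + AI explanation from the backend
  };
}

const merged = mergeResults(
  { prediction: 0.82, explanation: "High demand expected." }, // hypothetical
  { footTraffic: 1200, prediction: null }                     // hypothetical
);
console.log(merged);
```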
Smoke tests:

- Hit `http://localhost:8000/health` to confirm the backend is running
- From the frontend, ensure predictions render after placing a business on the map