generated from nighthawkcoders/student_2025
Overview, Issues, and Demo
Issues
- Initial Planning: https://github.com/SanPranav/QcommVNE_Frontend/issues/43
- Burndown List: https://github.com/SanPranav/QcommVNE_Frontend/issues/52#issue-3081950342
- General Timeline: https://github.com/SanPranav/QcommVNE_Frontend/issues/48
- Tracking Sheet: https://github.com/SanPranav/QcommVNE_Frontend/issues/47
Demo/Key Progress/5 Things
- Past Fire Locations/Data: Created a Leaflet.js map of fire locations across the United States from NASA FIRMS historical data (2015-2022). The map is color coded by metrics such as FRP (fire radiative power) and fire brightness, and it updates dynamically: the user selects a specific month and year, and only that slice of the data is shown. The backend preprocesses the dataset with pandas ahead of time, so when the endpoint is called only a small, pre-segmented payload is sent to the frontend, keeping processing times low and map rendering fast.
- Other Graphical Representations (FRP vs. Brightness scatter, Brightness histogram, Fires line chart): Built additional charts from the same NASA FIRMS 2015-2022 dataset shown on the map. Each graph visualizes a different facet of the data and highlights key trends, such as the relationship between FRP and brightness and the change in fire counts over time.
- Machine Learning Models Functionality: In addition to geospatial visualization, three machine learning models were implemented: Facebook’s Prophet for time series forecasting of monthly fire trends, polynomial regression for modeling non-linear temporal patterns in fire frequency, and unsupervised clustering (DBSCAN) to group fires based on location and intensity. These models enable users to explore both spatial and temporal fire trends, identify fire-prone regions, and examine predicted changes over time.
- Machine Learning API & Frontend Implementation: Created the model classes and corresponding API classes so each model could be exposed to the frontend. This required several techniques: Base64 image conversion to display generated charts in the browser, custom parsing to send animation data to the frontend for animated display (work in progress), and other data-transfer methods to support efficient prediction from the ML models (advanced regression/Prophet, polynomial regression, clustering; clustering and polynomial regression are still in progress).
- Lesson/Tutorial: Created a blog accessible from the dashboard to teach the fundamental data science and machine learning used in the historical data dashboard to enable student understanding and future use.
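The preprocess-then-filter flow described above can be sketched as follows. This is a minimal illustration, not the project's actual code: the function names and the FIRMS-style column names (`acq_date`, `brightness`, `frp`) are assumptions for the example.

```python
import pandas as pd

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    """One-time preprocessing: parse dates and add year/month keys
    so per-request filtering is a cheap column comparison."""
    df = df.copy()
    df["acq_date"] = pd.to_datetime(df["acq_date"])
    df["year"] = df["acq_date"].dt.year
    df["month"] = df["acq_date"].dt.month
    return df

def fires_for(df: pd.DataFrame, year: int, month: int) -> list[dict]:
    """Return only the rows the map needs for the selected month/year."""
    subset = df[(df["year"] == year) & (df["month"] == month)]
    # Send a compact payload: coordinates plus the metrics used for color coding.
    return subset[["latitude", "longitude", "brightness", "frp"]].to_dict("records")
```

Because `preprocess` runs once at startup, each endpoint call only performs the boolean filter and serialization, which is what keeps the map responsive.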
Note: What is Prophet?
- Prophet is a forecasting model developed by Meta for producing high-quality forecasts of time series data. It is particularly effective for data with strong seasonality and several seasons of historical data. Prophet is based on an additive model: it decomposes a time series into trend, seasonal, and holiday components, then combines their predictions.
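The additive decomposition Prophet uses can be written as:

$$
y(t) = g(t) + s(t) + h(t) + \epsilon_t
$$

where $g(t)$ is the trend, $s(t)$ the periodic seasonal component, $h(t)$ the effect of holidays, and $\epsilon_t$ the residual error the model does not capture.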
Skill Demonstration:
- Software Engineering Practices: Software engineering principles were applied throughout to organize and improve the development process. Changes were planned before implementing any of the machine learning models (Prophet forecasting, polynomial regression, clustering). All backend code was clearly commented to explain key steps and logic, especially in data preprocessing and model use.
- Software Development Lifecycle Practices: Source control was managed with GitHub, where all work was pushed to a central repository. Feature changes were developed using forks and branches—one for frontend integration, one for backend models, and one for data preprocessing. This separation kept things modular and reduced bugs. Changes were merged through pull requests after review and testing.
- Data Types: Various data types were used across both Python and JavaScript. Numbers were critical for model inputs like FRP and brightness. Strings were used to parse, filter, and label dates. Booleans controlled whether elements should be displayed (e.g., loading or error states). Arrays were used to store time series data, clusters, or spatial coordinates. JSON objects passed structured data between the Python backend and JavaScript frontend. SQLite tables (optional) were considered for caching and fast retrieval of preprocessed datasets for efficient filtering and map rendering.
- Operators: Mathematical operations powered machine learning models, especially regression and clustering. String operations handled date parsing, filtering by month or year, and formatting. Boolean expressions were used to determine when to load data, display loading animations, or handle user input logic.
- Control Structures: Iteration over rows of data (using loops or pandas) helped generate aggregates and clean datasets. Conditional statements handled missing data, unexpected inputs, or year/month mismatches. Nested conditions ensured multiple filters could be applied in sequence. try/except blocks in Python helped manage file errors and data processing failures. .then/.catch blocks in JavaScript handled asynchronous fetch calls from the user interface, ensuring smooth data loading even under errors or delays.
- Input/Output: HTML5 inputs (dropdowns) let users select a year and month dynamically. Input validation prevented users from triggering map loads without valid selections. The DOM was used to show and hide interface elements such as loaders or messages. JavaScript selected these elements using getElementById and querySelector, and class changes like hidden and loading were toggled to reflect state changes. Data was fetched asynchronously and inserted into the map via DOM manipulation and Leaflet.js rendering.
- Classes: Backend Python code used classes to encapsulate each machine learning model, with methods for training, prediction, and output formatting. This modular approach allowed each model to be tested independently and made it easy to swap in alternate algorithms. On the frontend, JavaScript used object-like structures to manage user-interaction state and the dynamic rendering of map layers.
Some Images
Commit Evidence
Frontend Commits
Commit Group: Making all the commits for the ML implementation for the Advanced regression model

Commit Group: Making all the commits for the later dashboard graphs (line chart, histogram, etc.)

Commit Group: Making all the commits for the initial dashboard, map, and historical dataset integration

Commit Group: Making all the commits for the datascience blog

Backend Commits
Commit Group: Making the base maps and preprocessing the data

Commit Group: Making foundation models and experimenting with architecture and data

Commit Group: Making models and API classes for ML models

Commit Group: Testing models and API classes

Commit Group: Setup for frontend implementation

Lesson Commits