A Streamlit-powered interactive dashboard for analyzing credit card fraud patterns, comparing machine learning models, and visualizing detection metrics.
- Interactive Data Exploration: Visualize fraud patterns through univariate, bivariate, and multivariate analysis
- Model Comparison: Evaluate Logistic Regression, Naive Bayes, and Decision Tree performance
- Explainable AI: SHAP values for model interpretability
- Responsive Design: Mobile-friendly Streamlit interface
Short summary
A Streamlit dashboard for exploring insurance/transaction claims and running simple fraud-prediction experiments. The app (Nina.py) provides interactive pages for:
- Viewing claim metadata and a vertical "Claim Info" table
- Exploring similar cases ("Similarcases")
- Visual indicators and bivariate analysis ("Indicators")
- A simple network/relationships view ("Network")
- Collecting user feedback about predictions ("Feedback")
The app includes exploratory plots (Plotly, seaborn), simple machine‑learning models (Logistic Regression, Naive Bayes, KNN with GridSearchCV), and interactive widgets via Streamlit.
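The page layout above boils down to a name-to-renderer dispatch. Here is a minimal sketch of that pattern; the `render_*` functions and the `claim` dict are hypothetical stand-ins for the app's real Streamlit pages:

```python
# Minimal page-dispatch sketch. The render functions below are hypothetical
# placeholders; in Nina.py each page draws Streamlit widgets instead.
def render_claim_info(claim):
    # The app shows a vertical "Claim Info" table (one field per row).
    return {field: value for field, value in claim.items()}

def render_feedback(claim):
    # Placeholder for the thumbs up / down feedback widget.
    return f"feedback widget for claim {claim.get('id', '?')}"

PAGES = {
    "Claim Info": render_claim_info,
    "Similarcases": lambda claim: "similar cases view",
    "Indicators": lambda claim: "bivariate indicator charts",
    "Network": lambda claim: "relationship graph",
    "Feedback": render_feedback,
}

# In the app, the selected name comes from streamlit_option_menu's
# option_menu(); rendering is then a single lookup:
# PAGES[selected_page](claim)
```

Keeping the pages in a dict like this makes adding or reordering sections a one-line change.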
- Interactive multi-page layout (using streamlit_option_menu)
- Data inspection and styled tables
- Bivariate/univariate visualizations (Plotly / seaborn)
- Model training/evaluation (Logistic Regression, Naive Bayes, KNN)
- Feedback widget (thumbs up / down)
- Example image display in the UI
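The model-training feature above includes a KNN tuned with GridSearchCV. A minimal sketch of that step, using synthetic data in place of the app's own dataset (all names here are illustrative):

```python
# Sketch of the KNN + GridSearchCV step on synthetic data; the real app
# fits on its claims dataset instead of make_classification output.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=400, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# Cross-validated search over the number of neighbors.
grid = GridSearchCV(
    KNeighborsClassifier(),
    param_grid={"n_neighbors": [3, 5, 7, 9]},
    cv=5,
    scoring="accuracy",
)
grid.fit(X_train, y_train)

print("best k:", grid.best_params_["n_neighbors"])
print("test accuracy:", grid.score(X_test, y_test))
```

GridSearchCV refits the best estimator on the full training split, so `grid.score` evaluates the tuned model directly.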
- Create a virtual environment (recommended):

```shell
python -m venv .venv
source .venv/bin/activate   # macOS / Linux
.venv\Scripts\activate      # Windows (PowerShell)
```

- Install dependencies:

```shell
pip install streamlit streamlit-option-menu streamlit-feedback pandas numpy plotly scikit-learn seaborn matplotlib
```

- Run the app:

```shell
streamlit run Nina.py
```

Open the URL Streamlit prints (usually http://localhost:8501).
Selecting "Show the analysis" from the sidebar displays several charts that explore the dataset, including:
- Distribution of transaction amount
- Distribution of transaction time
- Boxplots comparing transaction values across classes
These visualizations help to better understand the data patterns before applying machine learning models.
By selecting "Compare Algorithms" from the sidebar, the app evaluates multiple classification models and compares their accuracy on the testing set.
In this case, the following models were tested:
- Logistic Regression – Accuracy ≈ 99.92%
- Naive Bayes – Accuracy ≈ 97.77%
- Decision Tree – Accuracy ≈ 99.92%
This visualization makes it easy to compare how different machine learning algorithms perform on the dataset, helping to choose the best approach for fraud detection.
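The comparison loop can be sketched as follows, on synthetic imbalanced data; the accuracies quoted above come from the app's own test set, not from this example:

```python
# Sketch of the "Compare Algorithms" step; names and data are illustrative.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Imbalanced classes, mimicking the rarity of fraud.
X, y = make_classification(n_samples=1_000, n_features=10,
                           weights=[0.95, 0.05], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Naive Bayes": GaussianNB(),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
}

scores = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    scores[name] = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: {scores[name]:.4f}")
```

Note that on heavily imbalanced data, accuracy alone can be misleading (a model that always predicts "not fraud" already scores near the majority-class rate), so precision/recall are worth checking alongside it.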

