PerfSpectra is an interactive Streamlit-based tool for evaluating classification models. It helps researchers, data scientists, and ML practitioners analyze model performance with confusion matrices, classification reports, and mismatch analysis.
This tool is especially useful for writing shared-task papers, as it automates error analysis and provides downloadable reports in multiple formats (CSV, PDF, and TXT).
✅ Confusion Matrix 📊 – Visualizes model performance
✅ Classification Report 📈 – Precision, Recall, F1-score
✅ Mismatch Analysis 🔍 – Highlights misclassified samples
✅ Downloadable Reports 📥 – Get insights in CSV, PDF, and TXT
✅ Multi-file Support 📂 – Compare multiple prediction files
✅ Interactive UI 🎨 – Built with Streamlit for ease of use
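Under the hood, this kind of evaluation maps onto standard scikit-learn calls. The sketch below is a rough illustration of what the app computes, assuming pandas and scikit-learn and using placeholder file names; it is not necessarily how `app.py` itself is written:

```python
# Minimal sketch of the evaluation PerfSpectra automates.
# File names are placeholders; the app's own code may differ.
import pandas as pd
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix

true_df = pd.read_csv("true_labels.csv")   # columns: Id, Label
pred_df = pd.read_csv("predictions.csv")   # columns: Id, Label

# Join on the shared Id column so rows line up regardless of file order.
merged = true_df.merge(pred_df, on="Id", suffixes=("_true", "_pred"))

print("Accuracy:", accuracy_score(merged["Label_true"], merged["Label_pred"]))
print(confusion_matrix(merged["Label_true"], merged["Label_pred"]))
print(classification_report(merged["Label_true"], merged["Label_pred"]))

# Mismatch analysis: every row where the prediction disagrees with the truth.
mismatches = merged[merged["Label_true"] != merged["Label_pred"]]
print(mismatches)
```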
1️⃣ Clone the repository:

```bash
git clone https://github.com/RJ-Hossan/PerfSpectra.git
cd PerfSpectra
```

2️⃣ Install dependencies:

```bash
pip install -r requirements.txt
```

3️⃣ Run the app:

```bash
streamlit run app.py
```

Once the app is running, use it as follows:

1️⃣ Upload True Labels (CSV) with columns:
- `Id` (unique identifier)
- `Label` (true class labels)

2️⃣ Upload Prediction Files (CSV) with columns:

- `Id` (matching unique identifier)
- `Label` (predicted class labels)

3️⃣ Get accuracy, confusion matrix, classification report, and mismatches
4️⃣ Download reports in CSV, PDF, or TXT format
Note: the column names `Id` and `Label` are matched case-insensitively; a sample input file is shown below.
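For reference, a true-labels file could look like the following (the IDs and class names here are made up for illustration; prediction files use the same two-column layout):

```csv
Id,Label
101,positive
102,negative
103,neutral
```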
This project is open-source, and contributions are welcome!
1️⃣ Fork the repo 🍴
2️⃣ Create a new branch 🌿
3️⃣ Make your changes ✨
4️⃣ Submit a pull request 📩
MIT License – Feel free to use and modify!
💬 Have suggestions? Want to contribute? Open an issue or connect with me on LinkedIn.
⭐ If you find this useful, don't forget to star the repo! 🌟