# SafeLink-AI

AI-powered malicious link detection system

**Deployed link:** https://pngd79pm-5000.inc1.devtunnels.ms/
SafeLink-AI is a security-focused application designed to detect malicious or unsafe URLs using machine learning. It helps users identify phishing, malware, and suspicious links before interacting with them.
## Project Overview
Malicious links are a major attack vector for phishing and malware distribution. SafeLink-AI analyzes URLs with a trained ML model and predicts whether a link is safe or malicious; a minimal pipeline sketch follows the component list below.
The project is divided into three independent components:
- **Frontend** – user interface
- **Backend** – API & prediction logic
- **ML Model** – training and feature extraction
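This README doesn't include the training internals, so as a rough, non-authoritative sketch: a URL classifier of this kind can be built with scikit-learn roughly as follows (the dataset file, column names, and model choice are illustrative assumptions, not the repository's actual code).

```python
# Minimal sketch of a URL classifier, assuming a CSV with "url" and "label"
# columns (0 = safe, 1 = malicious); the real pipeline lives in ml-model/.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("urls.csv")  # hypothetical dataset file
X_train, X_test, y_train, y_test = train_test_split(
    df["url"], df["label"], test_size=0.2, random_state=42
)

# Character n-grams capture the "shape" of a URL (odd hosts, token patterns)
# without needing a word tokenizer.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
model = LogisticRegression(max_iter=1000)
model.fit(vectorizer.fit_transform(X_train), y_train)

print("accuracy:", model.score(vectorizer.transform(X_test), y_test))
print(model.predict(vectorizer.transform(["http://paypal-login.verify-account.ru/x"])))
```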
## Key Features
- Detects malicious URLs using ML
- Clean separation of frontend, backend, and ML logic
- Lightweight and modular design
- Easy to retrain the model with new data
- Secure repository (no model files exposed)
## Project Structure

```
SafeLink-AI-FullProject
├── backend/            # API & server-side logic
├── frontend/           # User interface
├── ml-model/           # Machine learning module
│   ├── train_model.py
│   ├── requirements.txt
│   └── README.md
└── .gitignore
```
## Tech Stack

**Machine Learning**
- Python
- Scikit-learn
- NLP feature extraction from URL strings (see the sketch after this list)

**Backend**
- Python
- Flask / FastAPI (depending on the implementation)

**Frontend**
- HTML / CSS / JavaScript (or React, if used)

**Tools**
- Git & GitHub
- VS Code
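The exact NLP features are defined in `ml-model/train_model.py` and aren't listed in this README. As one hedged illustration, lexical features commonly used for URL classification might look like this (every name below is hypothetical, not the repository's code):

```python
from urllib.parse import urlparse

# Hypothetical lexical features often used for URL classification;
# the project's real feature set lives in ml-model/train_model.py.
SUSPICIOUS_TOKENS = ("login", "verify", "update", "free", "secure", "account")

def extract_features(url: str) -> dict:
    parsed = urlparse(url if "://" in url else "http://" + url)
    host = parsed.netloc.lower()
    return {
        "url_length": len(url),                 # long URLs correlate with phishing
        "num_dots": host.count("."),            # many subdomains look suspicious
        "num_digits": sum(c.isdigit() for c in url),
        "has_at_symbol": int("@" in url),       # '@' can hide the real destination
        "has_ip_host": int(host.replace(".", "").isdigit()),
        "suspicious_tokens": sum(t in url.lower() for t in SUSPICIOUS_TOKENS),
    }

print(extract_features("http://203.0.113.9/secure-login/verify"))
```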
## How to Run the Project

### 1. Clone the Repository

```bash
git clone https://github.com/thorrwho/SafeLink-AI.git
cd SafeLink-AI-FullProject
```
### 2. Set Up the ML Environment

```bash
cd ml-model
pip install -r requirements.txt
python train_model.py
```
> **Note:** Trained model files are intentionally excluded from GitHub for security and size reasons.
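How `train_model.py` saves its output isn't specified here; a common scikit-learn pattern, assumed for illustration only, is `joblib`, which makes the excluded `.pkl` files easy to regenerate locally:

```python
# Hypothetical sketch of how train_model.py might persist its artifacts;
# the real script's filenames and model choice may differ.
import joblib
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

urls = ["http://example.com/home", "http://198.51.100.3/verify-login"]  # toy data
labels = [0, 1]  # 0 = safe, 1 = malicious
vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(3, 4))
model = LogisticRegression().fit(vectorizer.fit_transform(urls), labels)

joblib.dump(model, "model.pkl")            # these .pkl files stay local;
joblib.dump(vectorizer, "vectorizer.pkl")  # .gitignore keeps them out of Git

model = joblib.load("model.pkl")           # the backend reloads them at startup
```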
### 3. Start the Backend Server

```bash
cd backend
npm install
node server.js
```
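The commands above start a Node server (`server.js`), while the tech stack section lists Flask / FastAPI, so the entry point depends on which backend variant is in the repository. For the Flask case, a minimal sketch of the prediction endpoint could look like this (route name, artifact filenames, and response shape are all assumptions, not the project's confirmed API):

```python
# Hypothetical Flask backend sketch -- assumes model.pkl / vectorizer.pkl
# were produced by ml-model/train_model.py (filenames are assumptions).
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model.pkl")
vectorizer = joblib.load("vectorizer.pkl")

@app.route("/predict", methods=["POST"])  # endpoint name is an assumption
def predict():
    url = request.get_json(force=True).get("url", "")
    proba = model.predict_proba(vectorizer.transform([url]))[0, 1]
    return jsonify({
        "url": url,
        "malicious": bool(proba >= 0.5),
        "risk": round(float(proba), 3),
    })

if __name__ == "__main__":
    app.run(port=5000)  # matches the localhost:5000 address used below
```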
### 4. Frontend Application
The frontend is served directly by the backend. Once the backend is running, you can access the application in your web browser; a programmatic alternative is sketched after the steps below.
1. **Open your browser** and navigate to:
[http://localhost:5000](http://localhost:5000)
2. Use the new tabbed interface to choose your input method: **Text**, **URL**, or **File**.
3. After a scan, you will be redirected to the redesigned **Results** page with a risk gauge and detailed analysis.
4. Visit the **Dashboard** (`http://localhost:5000/dashboard.html`) to see the redesigned analytics on all scans performed.
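Scans can also be triggered outside the browser. Assuming a JSON endpoint like the hypothetical `/predict` route sketched earlier (the actual path and payload shape depend on the backend implementation), a quick check from Python might look like:

```python
# Hypothetical client call -- endpoint path and payload shape are assumptions.
import requests

resp = requests.post(
    "http://localhost:5000/predict",
    json={"url": "http://192.0.2.7/secure-login/verify"},
    timeout=10,
)
print(resp.json())  # e.g. {"url": "...", "malicious": true, "risk": 0.91}
```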
---
## Security Note
- Trained model files (`.pkl`) are excluded using `.gitignore` (see the example entry below)
- This prevents accidental exposure of large or sensitive files
- Models can be regenerated locally using the training script
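For reference, the exclusion can be as small as a single pattern in `.gitignore` (the repository's actual entries may differ):

```gitignore
# keep trained model artifacts out of version control
*.pkl
```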
## Academic Use
This project is suitable for:
- AI / ML mini projects
- Cyber security demonstrations
- Full-stack ML applications
- Viva and project evaluations
## Authors
Tharini Naveen, Tasheen Khan, Malavika I R
B.Tech – Artificial Intelligence & Machine Learning
Vidyavardhaka College of Engineering
## License
This project is intended for educational purposes only.