A smart Chrome extension that uses a hybrid defense system, including a local machine learning model, to preview and evaluate the safety of links before you click.
Phishing attacks are getting smarter. Traditional security relies on "blacklists" (like Google Safe Browsing), which are accurate but reactive—they only catch threatening sites after they have been reported.
SafeLink HoverGuard takes a "Defense-in-Depth" approach. It combines traditional checks with a proactive AI analysis engine that examines the lexical structure of a URL in milliseconds to flag suspicious patterns (like high entropy, excessive hyphens, or IP address hosts) used in zero-day phishing attacks.
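Lexical analysis of this kind boils down to computing simple text statistics over the URL string. The snippet below is a minimal, illustrative sketch of such features (the exact feature set SafeLink uses may differ):

```python
import math
import re
from collections import Counter
from urllib.parse import urlparse

def lexical_features(url: str) -> dict:
    """Compute a few simple lexical signals from a URL string."""
    parsed = urlparse(url)
    host = parsed.hostname or ""

    # Shannon entropy of the URL text: randomly generated domains score high.
    counts = Counter(url)
    entropy = -sum((c / len(url)) * math.log2(c / len(url)) for c in counts.values())

    return {
        "length": len(url),
        "entropy": round(entropy, 3),
        "num_hyphens": url.count("-"),
        "num_dots": url.count("."),
        "has_ip_host": bool(re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host)),
        "uses_https": parsed.scheme == "https",
    }

print(lexical_features("http://203.0.113.7/secure-login-update-account"))
```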
- 🤖 AI-Powered Phishing Detection: A local Python backend uses a trained Random Forest classifier to analyze URL text patterns and assign a risk confidence score.
- 🕵️‍♂️ Instant Link Preview: Hold `Shift` and hover over any link to see a detailed safety card.
- 🔄 Redirect Unfurling: Automatically resolves shortened links (e.g., `bit.ly`) to reveal the final destination URL (see the sketch after this list).
- 🌐 Domain & Protocol Inspection: Clearly highlights the target domain and warns if the site is insecure (HTTP).
- 🛡️ Google Safe Browsing Integration: Cross-references URLs against Google's known threat database.
- ⚡ Blazing Fast: The AI model runs locally on your machine for near-instant analysis without sending your browsing history to third-party servers.
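The redirect-unfurling feature above is essentially "follow the redirect chain and report where it ends". The extension does this in its background script; the Python sketch below only illustrates the idea:

```python
import requests

def unfurl(url: str, timeout: float = 5.0) -> str:
    """Follow HTTP redirects (e.g., from a bit.ly link) and return the final URL."""
    # HEAD keeps the request lightweight; some servers only redirect on GET,
    # so a real implementation may need a GET fallback.
    response = requests.head(url, allow_redirects=True, timeout=timeout)
    return response.url

# print(unfurl("https://bit.ly/<some-short-link>"))  # prints the resolved destination
```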
SafeLink operates as a full-stack application composed of two parts:
- The Chrome Extension (Frontend): Built with React, TypeScript, and Vite. It handles user interaction, injects the tooltip into webpages, and coordinates security checks in the background script.
- The ML Server (Backend): A lightweight Python Flask API that serves a scikit-learn Machine Learning model. It accepts URLs from the extension, extracts lexical features, and returns a phishing probability score.
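A minimal sketch of what such a Flask endpoint can look like is shown below. The route name (`/predict`), request/response shape, model filename, and `to_features` helper are assumptions for illustration, not the project's documented API:

```python
from flask import Flask, jsonify, request
from flask_cors import CORS
import joblib

app = Flask(__name__)
CORS(app)  # let the extension's background script call this API from the browser

model = joblib.load("phishing_model.joblib")  # filename is an assumption

def to_features(url: str) -> list:
    # Hypothetical helper: must mirror the feature order used at training time.
    return [len(url), url.count("-"), url.count("."), int(url.lower().startswith("https"))]

@app.route("/predict", methods=["POST"])  # the real route name may differ
def predict():
    url = request.get_json(force=True).get("url", "")
    probability = model.predict_proba([to_features(url)])[0][1]  # class 1 assumed = phishing
    return jsonify({"url": url, "phishing_probability": round(float(probability), 3)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5001)
```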
Because this is a full-stack project, you need to set up both the Python backend and the Chrome extension frontend.
- Node.js (v16+) and npm
- Python (v3.8+) and pip
- Google Chrome browser
- Navigate to the ML server directory: `cd ml_server`
- Install Python dependencies: `pip install flask flask-cors scikit-learn pandas joblib`
- Train the AI model (note: you must provide your own dataset named `dataset_phishing.csv` in this folder first): `python3 train_model.py` (a rough outline of what such a script does is sketched after this list).
- Start the API server: `python3 app.py`. Keep this terminal window running. The server should listen on `http://0.0.0.0:5001`.
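For orientation, a training script like `train_model.py` typically follows the pattern below. The column names, feature choices, and output filename here are assumptions, not the project's exact pipeline:

```python
import pandas as pd
import joblib
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def to_features(url: str) -> list:
    # Keep the feature order in sync with whatever the API server uses.
    return [len(url), url.count("-"), url.count("."), int(url.lower().startswith("https"))]

df = pd.read_csv("dataset_phishing.csv")       # columns assumed: url, status
X = [to_features(u) for u in df["url"]]
y = (df["status"] == "phishing").astype(int)   # 1 = phishing, 0 = legitimate (assumption)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print("Hold-out accuracy:", model.score(X_test, y_test))

joblib.dump(model, "phishing_model.joblib")    # output filename is an assumption
```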
- Open a new terminal window in the project root directory.
- Install JavaScript dependencies: `npm install`
- (Optional) Set up Google Safe Browsing: Create a `.env` file in the root and add your API key: `VITE_GOOGLE_SAFE_BROWSING_API_KEY=your_key_here`
- Build the extension: `npm run build`. This creates a `dist` folder with the final extension files.
- Open Chrome and navigate to `chrome://extensions`.
- Toggle Developer mode in the top right corner.
- Click Load unpacked.
- Select the `dist` folder generated in the previous step.
- Ensure your Python ML server (`app.py`) is running in a terminal.
- Navigate to any webpage (or open an email draft).
- Hold down the `Shift` key and hover your mouse over a link.
- The SafeLink tooltip will appear, showing the analysis results.
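If the tooltip appears but shows no AI score, you can check that the backend is reachable independently of the extension. The quick check below follows the `/predict` route and JSON shape assumed in the sketches above, not a documented API:

```python
import requests

response = requests.post(
    "http://127.0.0.1:5001/predict",                    # route name is an assumption
    json={"url": "http://example-login-verify.test/account"},
    timeout=5,
)
print(response.json())  # expected: JSON containing a phishing probability field
```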
Distributed under the MIT License. See LICENSE for more information.

