
A real-time weapon detection system for security surveillance that identifies firearms and distinguishes them from harmless objects. The system processes video input, highlights detected weapons with bounding boxes, and can extract frames containing weapons for further analysis. Built using deep learning (Faster R-CNN with PyTorch) and OpenCV.



πŸ›‘οΈ Real-Time Weapon Detection System for Security Surveillance

This project is a computer vision system designed to detect weapons in video streams or files. It is intended to enhance public safety by identifying dangerous objects such as firearms in real-time or pre-recorded surveillance footage.


🎯 Target Weapons for Detection

The system can detect the following:

  • Primary Weapon Classes
    • 🔫 Firearms: handguns, pistols, rifles.
    • ✅ No Weapon: normal or safe scenarios.

Note: The system is trained to minimize false alarms from harmless objects such as toys.


🔧 Project Modules

1️⃣ video_inference.py

  • Purpose: Processes a video and produces an output video with red bounding boxes around detected weapons.
  • How it works:
    • Loads the trained Faster R-CNN model (detector_epochX.pth).
    • Reads a video file frame by frame.
    • Runs object detection on each frame.
    • Draws bounding boxes and class labels on detected weapons.
    • Saves the resulting video as .avi or .mp4.
  • Configurable paths:
    input_video_path = "input_video.mp4"  # Set your original video here
    output_video_path = "output.avi"      # Video with weapon detections
    checkpoint_path = "models/detector_epoch3.pth"  # Trained model path
  • Run Command:

    python src/video_inference.py
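The per-frame loop described above can be sketched roughly as follows. This is a minimal sketch, assuming a torchvision `fasterrcnn_resnet50_fpn` backbone with two classes and a 0.5 confidence threshold; the function names, the FPS fallback, and the threshold value are illustrative, not the project's actual code:

```python
def filter_detections(boxes, scores, threshold=0.5):
    """Keep only the boxes whose confidence meets the threshold."""
    return [box for box, score in zip(boxes, scores) if score >= threshold]


def run_video_inference(checkpoint_path, input_video_path, output_video_path,
                        threshold=0.5):
    """Draw red boxes around detections in every frame and save the result."""
    # Heavy dependencies are imported here so the helper above works without them.
    import cv2
    import torch
    import torchvision

    # Rebuild the architecture, then load the trained weights.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=2)
    model.load_state_dict(torch.load(checkpoint_path, map_location="cpu"))
    model.eval()

    cap = cv2.VideoCapture(input_video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    writer = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # OpenCV yields BGR uint8; torchvision expects RGB floats in [0, 1].
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
        with torch.no_grad():
            pred = model([tensor])[0]
        for x1, y1, x2, y2 in filter_detections(pred["boxes"].tolist(),
                                                pred["scores"].tolist(),
                                                threshold):
            cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)),
                          (0, 0, 255), 2)  # red in BGR
        if writer is None:
            h, w = frame.shape[:2]
            writer = cv2.VideoWriter(output_video_path,
                                     cv2.VideoWriter_fourcc(*"XVID"), fps, (w, h))
        writer.write(frame)
    cap.release()
    if writer is not None:
        writer.release()
```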

2️⃣ extract_weapon_frames.py

  • Purpose: Extracts frames from a video where weapons are detected, and saves them as images with bounding boxes applied.

  • How it works:

    • Loads the trained Faster R-CNN model.
    • Reads the input video frame by frame.
    • For frames with weapons detected above a confidence threshold, saves the frame to a folder.
    • Names the frames with the timestamp in seconds where the weapon appears.
  • Configurable paths:

    input_video_path = "input_video.mp4"   # Original video path
    frames_output_dir = "weapon_frames/"   # Folder to save extracted frames
    checkpoint_path = "models/detector_epoch3.pth"  # Trained model
  • Run Command:

    python src/extract_weapon_frames.py
  • Output: Each saved frame has red boxes drawn around detected weapons and is named frame_XXs.jpg.
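The extraction steps above can be sketched as below, under the same assumptions as the inference sketch (torchvision Faster R-CNN, illustrative names). The timestamp-based naming is isolated in a small helper:

```python
import os


def frame_filename(frame_index, fps):
    """Name a frame by the whole second at which it appears, e.g. frame_12s.jpg."""
    return f"frame_{int(frame_index / fps)}s.jpg"


def extract_weapon_frames(checkpoint_path, input_video_path, frames_output_dir,
                          threshold=0.5):
    """Save every frame containing an above-threshold detection, boxes drawn."""
    import cv2
    import torch
    import torchvision

    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=2)
    model.load_state_dict(torch.load(checkpoint_path, map_location="cpu"))
    model.eval()

    os.makedirs(frames_output_dir, exist_ok=True)
    cap = cv2.VideoCapture(input_video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
        with torch.no_grad():
            pred = model([tensor])[0]
        hits = [b for b, s in zip(pred["boxes"].tolist(), pred["scores"].tolist())
                if s >= threshold]
        if hits:
            for x1, y1, x2, y2 in hits:
                cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)),
                              (0, 0, 255), 2)
            cv2.imwrite(os.path.join(frames_output_dir,
                                     frame_filename(index, fps)), frame)
        index += 1
    cap.release()
```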


3️⃣ train_detector.py

  • Purpose: Train the Faster R-CNN model on your annotated dataset.

  • How it works:

    • Reads images and JSON annotations from data/train and data/val.
    • Creates a PyTorch dataset using JsonDetectionDataset.
    • Trains a Faster R-CNN model.
    • Saves checkpoints periodically and after each epoch.
  • Configurable paths & params:

    python src/train_detector.py --data-dir data --epochs 5 --batch-size 2 --device cuda
  • Resume Training from Checkpoint:

    python src/train_detector.py --data-dir data --epochs 5 --batch-size 2 --device cuda --resume models/checkpoint_epoch2_batch1000.pth
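A minimal sketch of what the training loop with `--resume` plausibly looks like. The epoch-parsing convention, the SGD hyperparameters, and the two-class setup are assumptions for illustration, not the project's confirmed code:

```python
import os
import re


def epoch_from_checkpoint(path):
    """Guess the epoch encoded in names like checkpoint_epoch2_batch1000.pth
    or detector_epoch3.pth (an assumed naming convention)."""
    match = re.search(r"epoch(\d+)", path)
    return int(match.group(1)) if match else 0


def train(data_dir="data", epochs=5, batch_size=2, device="cuda", resume=None):
    import torch
    import torchvision
    from torch.utils.data import DataLoader

    from dataset import JsonDetectionDataset  # src/dataset.py

    # Two classes: weapon and background/no-weapon.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=2)
    start_epoch = 0
    if resume:
        model.load_state_dict(torch.load(resume, map_location="cpu"))
        start_epoch = epoch_from_checkpoint(resume)
    model.to(device).train()

    loader = DataLoader(
        JsonDetectionDataset(data_dir, "train"),
        batch_size=batch_size,
        shuffle=True,
        collate_fn=lambda batch: tuple(zip(*batch)),  # keep variable-size targets
    )
    optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

    os.makedirs("models", exist_ok=True)
    for epoch in range(start_epoch, epochs):
        for images, targets in loader:
            images = [img.to(device) for img in images]
            targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
            # In train mode, Faster R-CNN returns a dict of partial losses.
            loss = sum(model(images, targets).values())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        torch.save(model.state_dict(), f"models/detector_epoch{epoch}.pth")
```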

βš™οΈ Installation & Setup

Installation

# Clone the repository
git clone https://github.com/Pushkkaarr/weapon-detection-system.git

# Navigate to the project folder
cd weapon-detection-system

# Install dependencies
pip install -r requirements.txt

πŸ—‚οΈ Directory Structure

weapon-detection/
│
├─ src/
│   ├─ train_detector.py          # Training script
│   ├─ video_inference.py         # Weapon detection on videos
│   ├─ extract_weapon_frames.py   # Extract frames with weapons
│   ├─ evaluate_model.py          # Evaluate accuracy
│   └─ dataset.py                 # JsonDetectionDataset class
│
├─ data/
│   ├─ train/images/              # Training images
│   ├─ train/labels/              # JSON annotations
│   ├─ val/images/                # Validation images
│   └─ val/labels/                # JSON annotations
│
├─ models/                        # Saved model checkpoints
├─ weapon_frames/                 # Extracted frames with weapons
└─ README.md
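The JsonDetectionDataset class in src/dataset.py can be imagined roughly as follows. This is a sketch only: the JSON annotation schema (an "objects" list of bbox/label entries) and the .jpg image extension are assumptions, not the project's documented format:

```python
import json
import os


def annotation_to_target(annotation):
    """Turn one JSON annotation into plain boxes/labels lists.

    Assumed schema: {"objects": [{"bbox": [x1, y1, x2, y2], "label": 1}, ...]}.
    """
    objects = annotation.get("objects", [])
    return {"boxes": [o["bbox"] for o in objects],
            "labels": [o["label"] for o in objects]}


class JsonDetectionDataset:
    """Pairs data/<split>/images/*.jpg with data/<split>/labels/*.json."""

    def __init__(self, root, split="train"):
        self.image_dir = os.path.join(root, split, "images")
        self.label_dir = os.path.join(root, split, "labels")
        self.names = sorted(os.path.splitext(n)[0]
                            for n in os.listdir(self.image_dir))

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        import torch
        from PIL import Image
        from torchvision.transforms.functional import to_tensor

        name = self.names[idx]
        image = to_tensor(
            Image.open(os.path.join(self.image_dir, name + ".jpg")).convert("RGB"))
        with open(os.path.join(self.label_dir, name + ".json")) as f:
            raw = annotation_to_target(json.load(f))
        # Faster R-CNN expects float32 boxes and int64 labels per image.
        target = {"boxes": torch.tensor(raw["boxes"], dtype=torch.float32),
                  "labels": torch.tensor(raw["labels"], dtype=torch.int64)}
        return image, target
```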

πŸ“ How to Update Paths

  • Videos: Change input_video_path in video_inference.py and extract_weapon_frames.py to the video you want to process.
  • Output folder: Change output_video_path or frames_output_dir in the scripts.
  • Trained Model: Ensure the correct checkpoint is loaded (detector_epochX.pth).

🚀 Running the System

  1. Train the model (optional if already trained):

    python src/train_detector.py --data-dir data --epochs 5 --batch-size 2 --device cuda
  2. Evaluate model accuracy:

    python src/evaluate_model.py
  3. Detect weapons in video:

    python src/video_inference.py
  4. Extract frames with detected weapons:

    python src/extract_weapon_frames.py

📂 Output

  • Demo Video: Shows the video processing and the model running end to end (the demo video is compressed, hence the low quality).
  • Weapon Detection Video: Saved as output.avi, or at the path configured in video_inference.py.
  • Weapon Frames: Saved in weapon_frames/ with names like frame_12s.jpg for a frame at 12 seconds in which a weapon was detected.

📊 Evaluation Plots

1. Training vs Validation Loss

The model's error on the training dataset (Train Loss) and on unseen validation data (Validation Loss), plotted across all training epochs. This illustrates how the model's prediction error evolves over time for both known and unseen samples.

2. Precision Over Epochs

The ratio of correctly detected weapons to the total number of detections made by the model, displayed over epochs. This indicates the consistency of correct detections relative to all predicted detections as training progresses.

3. Recall Over Epochs

The proportion of actual weapons in the dataset that are correctly detected by the model, tracked across training epochs. This shows how effectively the model identifies all relevant instances throughout training.

4. F1 Score Over Epochs

The harmonic mean of Precision and Recall measured across epochs. This metric balances correct detections against coverage of actual weapons over the course of training.

5. Confusion Matrix

A 2×2 tabular representation of classification results, showing the distribution of true positives, false negatives, false positives, and true negatives across all predictions.
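The precision, recall, and F1 curves above follow directly from these four confusion-matrix counts. As a sanity check, the relationships can be computed in a few lines (guarding against empty denominators):

```python
def metrics_from_confusion(tp, fp, fn, tn):
    """Precision, recall, and F1 from 2x2 confusion-matrix counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    # F1 is the harmonic mean of precision and recall.
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1
```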

6. mAP (Mean Average Precision) Over Epochs

A single aggregated metric representing both detection accuracy and localization precision of weapons across epochs. mAP captures the model's performance in predicting correct classes and accurately placing bounding boxes over the course of training.

