Simple video redaction tool that detects people, faces and other objects using YOLO and applies redaction (blur / pixelate / black box).
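At its core the pipeline is YOLO detection followed by an OpenCV redaction pass over each detected box. A minimal sketch of the blur variant (assuming `ultralytics` and `opencv-python` are installed; file paths here are placeholders, not the project's actual entry point):

```python
# Minimal sketch: detect objects in one frame and blur each detection.
# Assumes ultralytics + opencv-python; paths are placeholders.
import cv2
from ultralytics import YOLO

model = YOLO("models/yolov9c.pt")
frame = cv2.imread("frame.jpg")

for result in model(frame):
    for box in result.boxes.xyxy:          # (x1, y1, x2, y2) per detection
        x1, y1, x2, y2 = map(int, box)
        roi = frame[y1:y2, x1:x2]
        frame[y1:y2, x1:x2] = cv2.GaussianBlur(roi, (51, 51), 0)

cv2.imwrite("redacted.jpg", frame)
```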
This repository contains two UIs and a backend processing pipeline:
- `redactify/` — Python pieces (the model downloader, plus `redact_video.py` and `app.py` for processing).
- `redactify-front/` — the React SPA frontend.
- `redactify-back/` — the backend API and `download_models.py`.
- `models/` — place model weights (e.g. `yolov9c.pt`, `yolov8n-face.pt`) here.
This README explains how to set up the project, run it locally, and common troubleshooting steps.
- Use the Streamlit UI for local experiments (single machine): `pip install -r requirements.txt` (see the note about `torch` below), then `streamlit run app.py`.
- Use the React frontend with a backend API for a production-like setup.
- Ensure model weights live in `./models/` and that `ultralytics`/`torch` are installed in your Python venv.
- Python 3.8+ and `virtualenv` or equivalent.
- Node.js (for the React frontend) if you want to run the SPA in `redactify-front/`.
- Enough disk space for model weights (hundreds of MB each).
- (Optional) CUDA-enabled GPU + matching `torch` wheel for much faster inference.
- Create and activate a virtual environment (example):

```bash
cd /path/to/redactify
python3 -m venv .venv
source .venv/bin/activate
```

- Install Python dependencies:

```bash
# project-root requirements (Streamlit + helpers)
pip install -r requirements.txt

# If you will use the backend folder requirements instead:
# cd redactify-back
# ./.venv/bin/python3 -m pip install -r requirements.txt
```

- Install PyTorch if it is not already present (choose the CPU or appropriate CUDA build):

```bash
# CPU-only wheel (example)
python -m pip install "torch" --index-url https://download.pytorch.org/whl/cpu
```

Notes:

- `ultralytics` depends on `torch`. Installing `torch` first (with the correct CUDA variant for your machine) avoids surprises.
- If you plan to use a GPU, install the matching CUDA `torch` wheel recommended by PyTorch: https://pytorch.org/get-started/locally/
Put model files under `./models/` (project root). Example expected filenames include:

- `models/yolov9c.pt` (general detector)
- `models/yolov8n-face.pt` (specialized face detector, optional)

You can download models manually or use the `download_models.py` script in `redactify-back/`:

```bash
# from project root or redactify-back/
python3 download_models.py
ls -l models
```

If the script cannot download automatically (network or permission issues), place the files manually in `models/`.
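Once the weights are in place, a quick sanity check that they load (assumes `ultralytics` is installed; this is just a verification snippet, not part of the pipeline):

```python
# Sanity check: try to load each expected weight file.
from ultralytics import YOLO

for path in ("models/yolov9c.pt", "models/yolov8n-face.pt"):
    try:
        YOLO(path)
        print(path, "loads OK")
    except Exception as exc:
        print(path, "failed to load:", exc)
```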
Streamlit runs the bundled `app.py`, which calls the local processing function. This is the easiest path for experiments.

```bash
# Activate venv first
source .venv/bin/activate
streamlit run app.py
```

Notes:

- The Streamlit UI runs processing synchronously by default (the UI waits while `process_video` runs). For long videos, prefer the job-based API (see below) or validate the config on a short preview using `frame_skip`/`max_frames`, as sketched after this note.
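For illustration, this is how `frame_skip`/`max_frames` typically bound a preview run. It is a sketch of the idea, not the actual `process_video` implementation, and the setting values are made up:

```python
# Illustrative preview loop: process every Nth frame, stop after max_frames.
import cv2

cap = cv2.VideoCapture("input.mp4")     # placeholder input
frame_skip, max_frames = 5, 100         # assumed example settings
processed, idx = 0, 0

while processed < max_frames:
    ok, frame = cap.read()
    if not ok:
        break
    if idx % frame_skip == 0:
        # ... run detection + redaction on `frame` here ...
        processed += 1
    idx += 1

cap.release()
```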
- Short-term: keep using Streamlit for local testing. Use the `frame_skip` and `max_frames` settings for quick iteration.
- Medium-term: implement a job queue (RQ/Celery or a simple thread-based manager) and add two endpoints (see the sketch after this list):
  - `POST /api/redact` — accepts a video, returns a `job_id`
  - `GET /api/status/<job_id>` — returns job status and optional progress/logs

  The frontend can upload and poll for status, or subscribe to SSE/WebSocket for live logs.
- Long-term: use GPU inference, batching, or optimized runtimes (ONNX / TensorRT) for production-level throughput.
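A minimal sketch of that medium-term job API, using Flask and a thread-based manager. The endpoint paths match the list above; the in-memory job store and field names are illustrative, and a real deployment would swap the thread for RQ/Celery:

```python
# Sketch of a job-based redaction API (Flask + threads); not production code.
import threading
import uuid

from flask import Flask, jsonify, request

app = Flask(__name__)
jobs = {}  # job_id -> {"status": ...}; replace with a real queue/store later

def run_job(job_id, path):
    jobs[job_id]["status"] = "running"
    try:
        # process_video(path, ...)   # call the real pipeline here
        jobs[job_id]["status"] = "done"
    except Exception as exc:
        jobs[job_id].update(status="failed", error=str(exc))

@app.post("/api/redact")
def redact():
    video = request.files["video"]
    job_id = uuid.uuid4().hex
    path = f"/tmp/{job_id}.mp4"
    video.save(path)
    jobs[job_id] = {"status": "queued"}
    threading.Thread(target=run_job, args=(job_id, path), daemon=True).start()
    return jsonify(job_id=job_id), 202

@app.get("/api/status/<job_id>")
def status(job_id):
    job = jobs.get(job_id)
    if job is None:
        return jsonify(error="unknown job"), 404
    return jsonify(job)
```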
- "Model file not found": verify
models/exists and contains the.ptfiles. Example:
ls -l models
# if empty, download or move weights into this folder-
ModuleNotFoundErrorimportingredact_videofromapp.py: run Streamlit from project root so local module imports work, or add an__init__.pyand use package imports. -
ultralytics/torcherrors: ensuretorchis installed and the CUDA/CPU build matches your environment. Checkpython -c "import torch; print(torch.__version__, torch.cuda.is_available())". -
Slow processing / UI hangs: enable
frame_skipand smallerINFERENCE_MAX_DIM, or run on GPU. Prefer job queue for large inputs.
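A sketch of the `INFERENCE_MAX_DIM` idea: downscale frames so the longest side stays under a cap before inference. The helper name and default value here are assumptions, not the project's actual setting:

```python
# Illustrative helper: cap the longest frame side before inference.
import cv2

def resize_for_inference(frame, max_dim=640):    # 640 is an assumed default
    h, w = frame.shape[:2]
    scale = max_dim / max(h, w)
    if scale >= 1.0:
        return frame                             # already small enough
    return cv2.resize(frame, (int(w * scale), int(h * scale)))
```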
- Use `logging` instead of `print` to capture structured logs in services (a minimal setup is sketched after this list).
- Add a `tests/` folder with a small sample clip and an integration test using `max_frames` to catch regressions.
- Keep model filenames and the downloader script in sync.
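A minimal `logging` setup along the lines suggested above (the logger name and format are just examples):

```python
# Minimal logging setup instead of print().
import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
log = logging.getLogger("redactify")
log.info("processing started")   # instead of print("processing started")
```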
PRs welcome. Keep changes focused, add tests for processing utilities when possible, and describe performance impacts for model or pipeline changes.
This repository does not include a license file. Add an appropriate license (MIT, Apache-2.0, etc.) if you intend to share this publicly.