A high-performance Rust-based service for processing images and videos, generating cryptographic and perceptual hashes, and creating verifiable manifests. ImageChain provides a robust solution for media authentication, content verification, and similarity search.
- Image Processing: Upload and process images to generate hashes
- Video Processing: Extract frames from videos at specified intervals
- Cryptographic Hashing: SHA3-256 for file integrity verification
- Perceptual Hashing: PDQ hashing for content-based image identification
- Deep Learning Embeddings: OpenCLIP EVA (default EVA02-L-14) via external Python service (GPU optional)
- Manifest Generation: Create verifiable manifests for media files
- REST API: Simple HTTP interface for integration
### Image Processing
- Support for multiple image formats (JPEG, PNG, WebP, etc.)
- Automatic format conversion and optimization
- Thumbnail generation
### Video Processing
- Frame extraction at configurable intervals
- Support for multiple video formats via FFmpeg
- Efficient frame processing pipeline
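For illustration, here is a minimal sketch of interval-based extraction by shelling out to the FFmpeg CLI. `extract_frames` is a hypothetical helper, not part of the ImageChain API, and the service's internal pipeline may differ:

```rust
// Hypothetical helper (not the ImageChain API): emit one frame every
// `interval_secs` seconds by invoking the ffmpeg CLI.
use std::process::Command;

fn extract_frames(video: &str, out_dir: &str, interval_secs: f64) -> std::io::Result<bool> {
    std::fs::create_dir_all(out_dir)?;
    // fps=N emits N frames per second, so 1/interval gives one frame per interval.
    let vf = format!("fps={}", 1.0 / interval_secs);
    let pattern = format!("{}/frame_%04d.png", out_dir);
    let status = Command::new("ffmpeg")
        .args(["-i", video, "-vf", vf.as_str(), pattern.as_str()])
        .status()?;
    Ok(status.success())
}

fn main() -> std::io::Result<()> {
    let ok = extract_frames("input.mp4", "frames", 1.0)?;
    println!("frame extraction succeeded: {}", ok);
    Ok(())
}
```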
### Hashing & Fingerprinting
- Cryptographic hashing (SHA3-256)
- Perceptual hashing (PDQ)
- Content-based image identification
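To make the comparison model concrete: PDQ hashes are 256-bit fingerprints conventionally compared by Hamming distance, where a small distance indicates visually similar content. A minimal sketch over hex-encoded hashes like those in the manifests; `pdq_hamming_distance` is a hypothetical helper, not part of the ImageChain API:

```rust
// Hypothetical helper (not the ImageChain API): Hamming distance between
// two hex-encoded PDQ hashes; fewer differing bits means more similar.
fn pdq_hamming_distance(a_hex: &str, b_hex: &str) -> Option<u32> {
    if a_hex.len() != b_hex.len() {
        return None; // hashes must be the same length
    }
    let mut bits = 0;
    for (a, b) in a_hex.chars().zip(b_hex.chars()) {
        // XOR the 4-bit nibbles and count the differing bits.
        bits += (a.to_digit(16)? ^ b.to_digit(16)?).count_ones();
    }
    Some(bits)
}

fn main() {
    // Hashes truncated for illustration; real PDQ hashes are 64 hex chars.
    let a = "00000000000000000000000000000000";
    let b = "0000000000000000000000000000000f";
    println!("distance: {:?} bits", pdq_hamming_distance(a, b)); // Some(4)
}
```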
### Deep Learning
- Image embeddings via OpenCLIP/EVA models (default EVA02-L-14)
- Cosine similarity for image comparison
- External Python FastAPI embedding service; configurable model/device
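The similarity measure itself is standard: the dot product of the two embedding vectors divided by the product of their norms. A free-standing sketch of what `EmbeddingModel::cosine_similarity` conceptually computes (the crate's actual implementation may differ in detail):

```rust
// Illustrative from-scratch cosine similarity over embedding vectors.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    assert_eq!(a.len(), b.len(), "embeddings must have equal dimensions");
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if norm_a == 0.0 || norm_b == 0.0 {
        return 0.0; // degenerate zero vector
    }
    dot / (norm_a * norm_b)
}

fn main() {
    let a = [1.0_f32, 0.0, 1.0];
    let b = [1.0_f32, 1.0, 0.0];
    // dot = 1, both norms = sqrt(2), so similarity = 0.5.
    println!("similarity = {:.2}", cosine_similarity(&a, &b));
}
```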
### Manifest System
- JSON-based manifest format
- Cryptographic verification
- Tamper-evident design
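The integrity check behind verification can be pictured as recomputing the file's SHA3-256 and comparing it against the hash recorded in the manifest; any change to the file flips the hash. A minimal sketch, assuming the `sha3` and `hex` crates (the service's actual verification logic may differ):

```rust
// Sketch of the tamper-evidence idea, assuming the `sha3` and `hex` crates:
// recompute the file's SHA3-256 and compare it with the manifest's hash.
use sha3::{Digest, Sha3_256};

fn hash_matches_manifest(path: &str, manifest_hash_hex: &str) -> std::io::Result<bool> {
    let bytes = std::fs::read(path)?;
    let digest = Sha3_256::digest(&bytes);
    Ok(hex::encode(digest) == manifest_hash_hex.to_lowercase())
}

fn main() -> std::io::Result<()> {
    // A mismatch here reveals that the file was modified after manifest creation.
    let ok = hash_matches_manifest("example.jpg", "a1b2c3...")?;
    println!("manifest hash matches: {}", ok);
    Ok(())
}
```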
### Web API
- RESTful endpoints
- File upload support
- JSON responses
- CORS support
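All endpoints wrap their results in a common `success`/`data` envelope, as the example responses below show. A client-side sketch of deserializing it, assuming the `serde` and `serde_json` crates (not part of the ImageChain crate itself):

```rust
// Client-side sketch of the response envelope used in the examples below;
// assumes serde (with the derive feature) and serde_json.
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct ApiResponse<T> {
    success: bool,
    data: T,
}

#[derive(Debug, Deserialize)]
struct VerifyResult {
    is_valid: bool,
}

fn main() -> serde_json::Result<()> {
    let raw = r#"{ "success": true, "data": { "is_valid": true } }"#;
    let parsed: ApiResponse<VerifyResult> = serde_json::from_str(raw)?;
    println!("valid: {}", parsed.data.is_valid);
    Ok(())
}
```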
### Prerequisites

- Rust (latest stable version)
- FFmpeg (for video processing)
- Docker (optional; recommended for running the full stack)
- OpenSSL (for cryptographic operations)
### Library Usage

Compute an embedding for a single image:

```rust
use imagechain::{EmbeddingModel, init};
use image::open;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Initialize the application
    init()?;

    // Create a new embedding model
    let model = EmbeddingModel::new();

    // Load an image
    let img = open("path/to/your/image.jpg")?;

    // Compute its embedding
    let embedding = model.compute_embedding(&img)?;
    println!("Generated embedding: {:?}", embedding);

    Ok(())
}
```

Compare two images by embedding similarity:

```rust
use imagechain::{EmbeddingModel, init};
use image::open;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    init()?;
    let model = EmbeddingModel::new();

    // Load two images
    let img1 = open("path/to/first/image.jpg")?;
    let img2 = open("path/to/second/image.jpg")?;

    // Compute embeddings
    let emb1 = model.compute_embedding(&img1)?;
    let emb2 = model.compute_embedding(&img2)?;

    // Compare embeddings
    let similarity = EmbeddingModel::cosine_similarity(&emb1, &emb2);
    println!("Similarity: {:.2}%", similarity * 100.0);

    Ok(())
}
```

Install FFmpeg for your platform:

```bash
# Ubuntu/Debian
sudo apt update
sudo apt install ffmpeg

# macOS
brew install ffmpeg

# Windows
choco install ffmpeg
```

Start the full stack with Docker:

```bash
# Start the service
docker-compose up -d

# The API will be available at http://localhost:3000
```

Notes:
- The stack starts two services from `docker-compose.yml`:
  - `imagechain` (Rust API) on http://localhost:3000
  - `embedding` (Python OpenCLIP service) on http://localhost:8001
- Default embedding model is EVA02-L-14 (pretrained: laion2b_s9b_b144k) on CPU.
- GPU acceleration is available via the optional `embedding-gpu` service. See "Embedding Service" below.
To build and run manually instead:

1. Clone the repository:

   ```bash
   git clone https://github.com/neyaadeez/imagechain.git
   cd imagechain
   ```

2. Install system dependencies:

   ```bash
   # Ubuntu/Debian
   sudo apt update
   sudo apt install -y ffmpeg libssl-dev pkg-config
   ```

3. Build the project:

   ```bash
   cargo build --release
   ```

4. Run the server:

   ```bash
   ./target/release/imagechain
   ```
### POST /api/upload

```
POST /api/upload
Content-Type: multipart/form-data

file: <media_file>
```

Query parameters:

- `include_embeddings` (bool, default: `false`): include image/frame embeddings
- `extract_frames` (bool, default: `true`; video only): enable/disable frame extraction
- `frame_interval_secs` (f64, default: 1.0; video only): seconds between frames
- `max_frames` (usize, optional; video only): cap on the number of processed frames. If omitted, the full video is processed.
Response:

```json
{
"success": true,
"data": {
"media_type": "Image",
"file_name": "example.jpg",
"file_size": 12345,
"created_at": "2023-01-01T00:00:00Z",
"modified_at": "2023-01-01T00:00:00Z",
"sha3_256_hash": "a1b2c3...",
"pdq_hash": "0000000000000000...",
"frames": null,
"metadata": { "embedding": [0.1, 0.2, 0.3, "..."] }
}
}
```

Example (video) with frames and optional embeddings:

```json
{
"success": true,
"data": {
"media_type": "Video",
"file_name": "video1.mp4",
"file_size": 13927646,
"created_at": "2025-08-31T06:59:48Z",
"modified_at": "2025-08-31T06:59:49Z",
"sha3_256_hash": "...",
"pdq_hash": null,
"frames": [
{ "timestamp_secs": 0.0, "pdq_hash": "0101...", "embedding": [0.12, 0.03, "..."] },
{ "timestamp_secs": 1.0, "pdq_hash": "1110...", "embedding": [0.11, 0.07, "..."] }
],
"metadata": {
"frame_interval_secs": 1.0,
"frame_count": 2,
"max_frames": null,
"extracted_frames": true,
"original_extension": "mp4"
}
}
}
```

### POST /api/verify

```
POST /api/verify
Content-Type: application/json

{
"media_type": "Image",
"file_name": "example.jpg",
"sha3_256_hash": "a1b2c3..."
}
```

Response:

```json
{
"success": true,
"data": {
"is_valid": true
}
}
```

A complete example comparing two images:

```rust
use imagechain::{EmbeddingModel, init};
use image::open;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    init()?;
    let model = EmbeddingModel::new();

    let img1 = open("images/cat.jpg")?;
    let img2 = open("images/dog.jpg")?;

    let emb1 = model.compute_embedding(&img1)?;
    let emb2 = model.compute_embedding(&img2)?;

    let similarity = EmbeddingModel::cosine_similarity(&emb1, &emb2);
    println!("Images are {:.2}% similar", similarity * 100.0);

    Ok(())
}
```

Run the tests:

```bash
# Run all tests
cargo test
# Run tests with detailed output
cargo test -- --nocapture
# Run specific test module
cargo test test_embeddings -- --nocapture
```

Build the development container:

```bash
docker build -t imagechain-dev .
```

Run tests in the container:

```bash
docker run -it --rm -v $(pwd):/app -w /app imagechain-dev cargo test
```

To contribute:

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- FFmpeg team for the amazing multimedia framework
- PyTorch team for the Rust bindings
- The Rust community for their awesome ecosystem
To run the server from source:

1. Start the server:

   ```bash
   cargo run --release
   ```

2. The server will start on http://127.0.0.1:3000
Upload a media file:

```
POST /api/upload
Content-Type: multipart/form-data

file: <file>
```

Example using curl:

```bash
curl -X POST -F "file=@/path/to/your/image.jpg" http://localhost:3000/api/upload
```

Example with query flags (video, 0.5 s interval, cap at 100 frames, include embeddings):

```bash
curl -X POST \
  -F "file=@/path/to/your/video.mp4" \
  "http://localhost:3000/api/upload?extract_frames=true&frame_interval_secs=0.5&max_frames=100&include_embeddings=true"
```

Verify a manifest:

```
POST /api/verify
Content-Type: application/json

<manifest_json>
```

Example using curl:

```bash
curl -X POST -H "Content-Type: application/json" -d @manifest.json http://localhost:3000/api/verify
```

The manifest contains metadata about the processed media file, including hashes and other relevant information.
Example manifest for an image:

```json
{
"media_type": "Image",
"file_name": "example.jpg",
"file_size": 12345,
"created_at": "2023-01-01T00:00:00Z",
"modified_at": "2023-01-01T00:00:00Z",
"sha3_256_hash": "a1b2c3...",
"pdq_hash": "0000000000000000...",
"frames": null,
"metadata": {}
}
```

Create a `.env` file in the project root to configure the application:

```
RUST_LOG=info
UPLOAD_DIR=./uploads
EMBEDDING_SERVICE_URL=http://localhost:8001
# Embedding service (Python, OpenCLIP) defaults:
# MODEL_NAME=EVA02-L-14
# PRETRAINED=laion2b_s9b_b144k
# DEVICE=cpu
```

ImageChain can call an external Python FastAPI service to compute image embeddings using OpenCLIP models.
- Default model: `EVA02-L-14` with `laion2b_s9b_b144k`
- Configure via environment variables on the Python service:
  - `MODEL_NAME` (e.g., EVA02-L-14, ViT-B-32)
  - `PRETRAINED` (e.g., laion2b_s9b_b144k)
  - `DEVICE` (`cpu` or `cuda`)
- Rust connects via `EMBEDDING_SERVICE_URL` (e.g., `http://embedding:8001` in Docker, or `http://localhost:8001` locally).
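For illustration, a client might call the service like this. Note that the `/embed` route and the `"file"` multipart field name below are assumptions, not documented endpoints (only `/health` and `/models` appear below); the sketch also assumes the `reqwest` crate with its `blocking` and `multipart` features:

```rust
// Hedged sketch of calling the embedding service over HTTP. The `/embed`
// route and the "file" field name are assumptions for illustration only.
use reqwest::blocking::{multipart, Client};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let base = std::env::var("EMBEDDING_SERVICE_URL")
        .unwrap_or_else(|_| "http://localhost:8001".to_string());
    let form = multipart::Form::new().file("file", "path/to/image.jpg")?;
    let body = Client::new()
        .post(format!("{}/embed", base)) // hypothetical route
        .multipart(form)
        .send()?
        .text()?;
    println!("embedding response: {}", body);
    Ok(())
}
```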
Docker Compose services:

- `embedding` (CPU): builds from `python_service/Dockerfile` and exposes port 8001.
- `embedding-gpu` (CUDA, optional): builds from `python_service/Dockerfile.cuda`.
  - Requires NVIDIA drivers and the NVIDIA Container Toolkit.
  - To enable, switch `imagechain.depends_on` to `embedding-gpu` and set `EMBEDDING_SERVICE_URL=http://embedding-gpu:8001`.
Health check and model info:

```bash
curl http://localhost:8001/health
curl http://localhost:8001/models
```

Run the test suite:

```bash
cargo test
```