VisionFlow transforms static documents into living knowledge ecosystems. Deploy autonomous AI agents that continuously analyse your data and surface connections, while you and your team explore the results in an immersive 3D space.
Your Data --> AI Agent Teams --> Knowledge Graph --> 3D Visualisation --> Team Collaboration
| Feature | VisionFlow | Traditional Tools |
|---|---|---|
| Performance | 60 FPS @ 100K nodes (GPU) | 4-15 FPS (CPU only) |
| Latency | 10ms WebSocket updates | 50-100ms+ |
| Privacy | Self-hosted, your infrastructure | Third-party APIs |
| Collaboration | Real-time multi-user 3D | Single-user text output |
| Intelligence | Autonomous 24/7 AI agents | Reactive query-based |
| Auditability | Git version control | No transparency |
Deploy in under 60 seconds:
```bash
git clone https://github.com/DreamLab-AI/VisionFlow.git
cd VisionFlow && cp .env.example .env
docker-compose --profile dev up -d
```

Access your instance:
| Service | URL | Description |
|---|---|---|
| Frontend | http://localhost:3001 | 3D visualisation interface |
| Neo4j Browser | http://localhost:7474 | Graph database explorer |
| API | http://localhost:4000/api | REST & WebSocket endpoints |
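To confirm the API is reachable, you can open a WebSocket from a small Rust client. The sketch below uses tokio-tungstenite; the `/api/ws` path is an assumption for illustration, so check the WebSocket protocol reference for the actual route.

```rust
use futures_util::StreamExt;
use tokio_tungstenite::connect_async;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Hypothetical path: consult the API reference for the real WS route.
    let (mut ws, _response) = connect_async("ws://localhost:4000/api/ws").await?;
    // Position updates arrive as binary frames (36 bytes per node, protocol V2).
    if let Some(Ok(msg)) = ws.next().await {
        println!("received {} bytes", msg.into_data().len());
    }
    Ok(())
}
```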
Your AI agents begin analysing data immediately.
Native Installation (Rust + CUDA)
```bash
# Prerequisites
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# Install CUDA 12.4: https://developer.nvidia.com/cuda-downloads

# Build and run
git clone https://github.com/DreamLab-AI/VisionFlow.git
cd VisionFlow && cp .env.example .env
cargo build --release --features gpu
cd client && npm install && npm run build && cd ..
./target/release/webxr
```
| Metric | Result | Notes |
|---|---|---|
| Max Nodes @ 60 FPS | 180,000 | GPU-accelerated physics |
| Max Nodes @ 30 FPS | 450,000 | Instanced rendering |
| Physics Speedup | 55x | GPU vs multi-threaded CPU |
| WebSocket Latency | 10ms | Binary protocol V2 |
| Bandwidth Reduction | 80% | 3.6 MB vs 18 MB per frame |
| Concurrent Users | 250+ | Stress-tested 48 hours |
| Operation | GPU (CUDA) | CPU | Speedup |
|---|---|---|---|
| Force Calculation | 2.3ms | 145ms | 63x |
| Position Update | 0.4ms | 12ms | 30x |
| Collision Detection | 1.8ms | 89ms | 49x |
| Total Frame | 4.5ms | 246ms | 55x |
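For intuition about why the GPU wins so decisively, the dominant cost is the all-pairs repulsion step, which is O(n²) over node positions. The naive CPU version below is a sketch of that loop, not VisionFlow's actual kernel; the CUDA version assigns one thread per node `i` and parallelises the outer loop.

```rust
/// Naive O(n^2) repulsive-force pass; illustrative only.
fn repulsion(pos: &[[f32; 3]], strength: f32) -> Vec<[f32; 3]> {
    let mut forces = vec![[0.0f32; 3]; pos.len()];
    for i in 0..pos.len() {
        for j in 0..pos.len() {
            if i == j { continue; }
            let d = [pos[i][0] - pos[j][0], pos[i][1] - pos[j][1], pos[i][2] - pos[j][2]];
            let dist_sq = (d[0] * d[0] + d[1] * d[1] + d[2] * d[2]).max(1e-6);
            let f = strength / dist_sq; // Coulomb-style falloff
            let inv = dist_sq.sqrt().recip();
            for k in 0..3 {
                forces[i][k] += d[k] * inv * f; // push i away from j
            }
        }
    }
    forces
}

fn main() {
    let pos = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]];
    let f = repulsion(&pos, 1.0);
    println!("{:?}", f[0]); // node 0 is pushed along -x, away from node 1
}
```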
Detailed benchmarks:
| Nodes | Edges | Frame Time | FPS | GPU Memory |
|---|---|---|---|---|
| 1K | 2K | 0.08ms | 12,500 | 4 MB |
| 10K | 20K | 0.5ms | 2,000 | 40 MB |
| 100K | 200K | 4.5ms | 222 | 400 MB |
| 500K | 1M | 18ms | 56 | 2 GB |
| 1M | 2M | 35ms | 29 | 4 GB |
| Solution | FPS | Latency | Bandwidth | GPU Memory |
|---|---|---|---|---|
| VisionFlow | 60 | 10ms | 3.6 MB | 400 MB |
| Neo4j Bloom | 25 | 45ms | 18 MB | 1.2 GB |
| GraphXR | 35 | 28ms | 8 MB | 650 MB |
| Gephi | 8 | N/A | N/A | N/A |
| Cytoscape | 12 | N/A | N/A | N/A |
Full benchmarks: docs/reference/performance-benchmarks.md
```mermaid
flowchart TB
    subgraph Client["Client Layer"]
        React["React + Three.js"]
        WebXR["WebXR / Quest 3"]
        Voice["Voice UI"]
    end
    subgraph Server["Rust Server (Actix)"]
        subgraph Actors["Actor System"]
            GS["GraphState"]
            PO["PhysicsOrchestrator"]
            SP["SemanticProcessor"]
            CC["ClientCoordinator"]
        end
        HP["Hexagonal Ports"]
    end
    subgraph Data["Data Layer"]
        Neo4j[(Neo4j 5.13)]
    end
    subgraph GPU["GPU Compute"]
        CUDA["100+ CUDA Kernels"]
    end
    Client <-->|"36-byte Binary Protocol"| Server
    Server <--> Neo4j
    Server <--> CUDA
    style Client fill:#e1f5ff,stroke:#0288d1
    style Server fill:#fff3e0,stroke:#ff9800
    style Data fill:#f3e5f5,stroke:#9c27b0
    style GPU fill:#e8f5e9,stroke:#4caf50
```
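The boxes in the actor system map onto Actix actors exchanging typed messages. A minimal sketch of that pattern follows; the `AddNode` message and handler are hypothetical, not the real `GraphState` API.

```rust
use actix::prelude::*;

// Hypothetical message; the real actors and their protocols live in the server crate.
#[derive(Message)]
#[rtype(result = "usize")]
struct AddNode { id: u32 }

struct GraphState { node_count: usize }

impl Actor for GraphState {
    type Context = Context<Self>;
}

impl Handler<AddNode> for GraphState {
    type Result = usize;

    fn handle(&mut self, _msg: AddNode, _ctx: &mut Context<Self>) -> usize {
        self.node_count += 1;
        self.node_count
    }
}

#[actix::main]
async fn main() {
    let addr = GraphState { node_count: 0 }.start();
    let count = addr.send(AddNode { id: 1 }).await.unwrap();
    println!("nodes: {count}");
}
```

Each actor owns its state exclusively, so no locks are needed: all mutation happens through the mailbox.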
| Principle | Implementation |
|---|---|
| Server-Authoritative | Neo4j is the single source of truth |
| Binary Protocol | 36-byte WebSocket messages (80% bandwidth reduction) |
| GPU Offloading | Physics, clustering, pathfinding accelerated 55x |
| Actor System | 21 specialised actors for state, physics, semantics |
| Event-Driven | Domain events with pub/sub for loose coupling |
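The 36-byte figure explains the bandwidth numbers above: at 100K nodes, 100,000 × 36 B = 3.6 MB per full frame, versus roughly 18 MB for an equivalent JSON payload, hence the 80% reduction. One plausible wire layout (an illustration, not the documented V2 format) packs a node id, position and velocity vectors, and a little metadata:

```rust
/// One possible 36-byte node update; the real V2 layout may differ.
#[allow(dead_code)]
#[repr(C)]
struct NodeUpdate {
    id: u32,            //  4 bytes
    position: [f32; 3], // 12 bytes
    velocity: [f32; 3], // 12 bytes
    color: u32,         //  4 bytes (packed RGBA)
    flags: u32,         //  4 bytes
}                       // total: 36 bytes, no padding under #[repr(C)]

fn main() {
    assert_eq!(std::mem::size_of::<NodeUpdate>(), 36);
    // 100_000 nodes * 36 bytes = 3.6 MB per frame.
    println!("{} MB per frame", 100_000.0 * 36.0 / 1e6);
}
```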
Deep Dive: Architecture Overview | Hexagonal CQRS | Actor System
| Layer | Technology |
|---|---|
| Backend | Rust 1.75+, Actix-web 4.11, Hexagonal Architecture |
| Database | Neo4j 5.13 (neo4rs) |
| GPU Compute | CUDA 12.4, cudarc, cust (100+ kernels) |
| Reasoning | OWL 2 EL, Whelk-rs, horned-owl |
| Frontend | React 18, Three.js (React Three Fiber), TypeScript |
| XR | Babylon.js, WebXR, Meta Quest 3 |
| AI | MCP Protocol, Claude, Microsoft GraphRAG |
| Networking | Binary WebSocket, QUIC/WebTransport (quinn) |
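Because Neo4j is the single source of truth, server code talks to it through neo4rs. A minimal connection sketch is below; the bolt port 7687 and credentials are assumptions (match them to your `.env`), and API details vary slightly across neo4rs versions.

```rust
use neo4rs::{query, Graph};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Assumed bolt URI and credentials; adjust to your deployment.
    let graph = Graph::new("bolt://localhost:7687", "neo4j", "password").await?;
    let mut rows = graph.execute(query("MATCH (n) RETURN count(n) AS c")).await?;
    while let Ok(Some(row)) = rows.next().await {
        let c: i64 = row.get("c")?;
        println!("graph holds {c} nodes");
    }
    Ok(())
}
```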
- Literature review automation - AI agents continuously analyse papers and surface connections
- Ontology development - OWL reasoning validates knowledge structures
- Collaborative exploration - Multi-user 3D navigation of research landscapes
- Corporate knowledge graphs - Integrate documents, wikis, databases
- AI-assisted discovery - Find hidden relationships across silos
- Audit compliance - Git-versioned changes with full provenance
- GraphRAG integration - Enhance LLM context with structured knowledge
- Training data curation - Visual exploration of dataset relationships
- Model interpretability - Visualise attention patterns and embeddings
- SNOMED CT / Gene Ontology - Whelk-rs handles 354K+ class ontologies (a toy reasoning sketch follows this list)
- Clinical decision support - Real-time reasoning over medical knowledge
- Drug discovery - Visualise compound-target-pathway relationships
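At its core, EL classification computes which classes subsume which others; the simplest piece of that is a transitive closure over SubClassOf axioms. The toy sketch below illustrates the idea only; it is not Whelk-rs's actual API or algorithm, and the class names are made up.

```rust
use std::collections::{HashMap, HashSet};

/// Toy EL-style subsumption: transitive closure of SubClassOf axioms.
/// Real reasoners (e.g. Whelk-rs) also handle existentials, conjunctions, etc.
fn ancestors<'a>(
    axioms: &'a HashMap<&'a str, Vec<&'a str>>, // class -> direct superclasses
    class: &'a str,
) -> HashSet<&'a str> {
    let mut seen = HashSet::new();
    let mut stack = vec![class];
    while let Some(c) = stack.pop() {
        for sup in axioms.get(c).into_iter().flatten() {
            if seen.insert(*sup) {
                stack.push(*sup); // follow newly discovered superclasses
            }
        }
    }
    seen
}

fn main() {
    let mut axioms = HashMap::new();
    axioms.insert("ViralPneumonia", vec!["Pneumonia"]);
    axioms.insert("Pneumonia", vec!["LungDisease"]);
    // ViralPneumonia ⊑ Pneumonia ⊑ LungDisease
    assert!(ancestors(&axioms, "ViralPneumonia").contains("LungDisease"));
}
```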
VisionFlow uses the Diataxis framework for organised documentation:
| Type | Purpose | Start Here |
|---|---|---|
| Tutorials | Learning-oriented | Installation, First Graph |
| How-To Guides | Task-oriented | Agent Orchestration, XR Setup |
| Explanations | Understanding-oriented | Architecture, Semantic Physics |
| Reference | Information-oriented | REST API, WebSocket Protocol |
Complete Documentation: docs/
- Modular actor architecture (21 actors)
- Neo4j as primary database
- Binary WebSocket protocol V2 (36 bytes)
- 100+ CUDA kernels (55x speedup)
- OWL 2 EL reasoning with Whelk-rs
- Meta Quest 3 WebXR (Beta)
- Vircadia multi-user integration
- SPARQL query interface
- Distributed GPU compute
- Nostr identity integration
- Apple Vision Pro native app
- Federated ontologies
- Kubernetes operator
- W3C DID for agents
Detailed Roadmap: See full roadmap with quarterly sprints
We welcome contributions! See our Contributing Guide.
```bash
# Fork and clone
git clone https://github.com/YOUR_USERNAME/VisionFlow.git
cd VisionFlow

# Setup
cargo build && cd client && npm install && cd ..

# Test
cargo test && npm test

# Submit PR
```

Contribution Areas: Bug Fixes | Documentation | Features | Performance | Testing
| Tier | CPU | RAM | GPU | Use Case |
|---|---|---|---|---|
| Minimum | 4-core 2.5GHz | 8 GB | Integrated | Development, < 10K nodes |
| Recommended | 8-core 3.0GHz | 16 GB | GTX 1060 / RX 580 | Production, < 50K nodes |
| Enterprise | 16+ cores | 32 GB+ | RTX 4080+ (16GB VRAM) | 100K+ nodes, multi-user |
Platform Support: Linux (full), macOS (CPU-only), Windows (WSL2), Meta Quest 3 (Beta)
Built on the work of:
- 3d-force-graph - Force-directed visualisation
- graph_RAG - Natural language queries
- JavaScriptSolidServer - Solid Protocol (AGPL-3.0)
Special thanks to Prof. Rob Aspin for research in immersive knowledge visualisation.
Mozilla Public License 2.0 - Use commercially, modify freely, share changes to MPL files.
Transform how you discover knowledge.
```bash
git clone https://github.com/DreamLab-AI/VisionFlow.git && cd VisionFlow && docker-compose --profile dev up -d
```

Documentation | Issues | Discussions
Built by the VisionFlow Team
