# Home
Welcome to the GraphBrew wiki! This comprehensive guide will help you understand, use, and extend the GraphBrew framework for graph reordering and benchmark analysis.
GraphBrew is a high-performance graph reordering framework that combines community detection with cache-aware vertex reordering to dramatically improve graph algorithm performance. It implements over 20 different reordering algorithms and provides tools to automatically select the best one for your specific graph.
- 20 Reordering Algorithms: From simple sorting to advanced ML-based selection (IDs 0-13, 15-20)
- Leiden Community Detection: State-of-the-art community detection for graph partitioning
- AdaptiveOrder: ML-powered perceptron that automatically selects the best algorithm
- Comprehensive Benchmarks: PageRank, BFS, Connected Components, SSSP, BC (5 automated), plus Triangle Counting (manual)
- Python Analysis Tools: Correlation analysis, benchmark automation, and weight training
- Iterative Training: Feedback loop to optimize adaptive algorithm selection
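To make the AdaptiveOrder feature above concrete, here is a minimal, illustrative sketch of perceptron-style algorithm selection: each candidate algorithm gets a learned weight vector, a graph is summarized as a feature vector, and the algorithm with the highest dot-product score wins. All names and weight values here are hypothetical; GraphBrew's real features and trained weights live in `scripts/weights/` (see the AdaptiveOrder-ML and Perceptron-Weights pages).

```python
# Illustrative perceptron-style selector; names and values are hypothetical,
# not GraphBrew's actual API or trained weights.

def score(weights, features):
    """Dot product of an algorithm's weight vector with graph features."""
    return sum(weights[k] * features.get(k, 0.0) for k in weights)

# Hypothetical per-algorithm weight vectors.
weights = {
    "LeidenCSR": {"modularity": 2.0, "avg_degree": 0.1},
    "Original":  {"modularity": -1.0, "avg_degree": 0.0},
}

# Hypothetical feature vector for one graph.
features = {"modularity": 0.7, "avg_degree": 15.0}

# Pick the algorithm with the highest score.
best = max(weights, key=lambda alg: score(weights[alg], features))
print(best)  # a high-modularity graph favors the community-based ordering
```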
- Installation - System requirements and build instructions
- Quick-Start - Run your first benchmark in 5 minutes
- Supported-Graph-Formats - EL, MTX, GRAPH, and other formats
- Reordering-Algorithms - Complete guide to all 20 algorithms
- Graph-Benchmarks - PageRank, BFS, CC, SSSP, BC, TC explained
- Community-Detection - How Leiden clustering works
- Running-Benchmarks - Command-line usage and options
- Benchmark-Suite - Automated experiment runner
- Correlation-Analysis - Finding the best algorithm for your graphs
- AdaptiveOrder-ML - The perceptron-based algorithm selector
- Perceptron-Weights - Training and tuning the ML model
- GraphBrewOrder - Per-community reordering explained
- Cache-Simulation - Cache performance analysis for algorithms
- Adding-New-Algorithms - Implement your own reordering method
- Adding-New-Benchmarks - Add new graph algorithms
- Code-Architecture - Understanding the codebase structure
- Python-Scripts - Analysis and utility scripts
- Command-Line-Reference - All flags and options
- Configuration-Files - JSON configs and settings
- Troubleshooting - Common issues and solutions
- FAQ - Frequently asked questions
```bash
# Clone, download graphs, build, and run the complete experiment
git clone https://github.com/UVA-LavaLab/GraphBrew.git
cd GraphBrew
python3 scripts/graphbrew_experiment.py --full --download-size SMALL
```

This single command will:
- Download benchmark graphs from SuiteSparse (87 graphs available)
- Build binaries automatically
- Pre-generate label mappings for consistent reordering
- Run all benchmarks with all 20 algorithms
- Execute cache simulations (L1/L2/L3 hit rates)
- Generate perceptron weights for AdaptiveOrder (includes cache + reorder time features)
```bash
# Auto-detect RAM and disk limits
python3 scripts/graphbrew_experiment.py --full --download-size ALL --auto-memory --auto-disk

# Set explicit limits
python3 scripts/graphbrew_experiment.py --full --download-size ALL --max-memory 32 --max-disk 100

# Fill ALL weight fields (cache impacts, topology features, per-benchmark weights)
# Auto-clusters graphs and generates type_N.json files in scripts/weights/
python3 scripts/graphbrew_experiment.py --fill-weights --graphs small --max-graphs 5
```
```bash
# Iterative training to reach target accuracy
python3 scripts/graphbrew_experiment.py --train-adaptive --target-accuracy 85 --graphs small
```

Auto-Clustering Type System: AdaptiveOrder automatically clusters graphs by feature similarity and generates per-cluster weights (type_0.json, type_1.json, etc.). At runtime, it selects the best-matching cluster based on graph features.
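The cluster-matching step can be pictured as a nearest-centroid lookup: each type_N.json summarizes a cluster of graphs, and a new graph is assigned to the closest cluster in feature space. The sketch below is a simplified illustration under that assumption; the actual clustering logic and file schema are defined by `scripts/graphbrew_experiment.py`, and the centroid values here are made up.

```python
# Simplified nearest-centroid cluster matching; centroids and feature
# names are hypothetical stand-ins for what type_N.json files encode.
import math

centroids = {
    "type_0": [0.8, 0.2],  # e.g. [modularity, degree_skew]
    "type_1": [0.3, 0.9],
}

def nearest_type(features):
    """Return the cluster whose centroid is closest (Euclidean) to features."""
    return min(centroids, key=lambda t: math.dist(centroids[t], features))

print(nearest_type([0.75, 0.25]))  # close to type_0's centroid
```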
| Size | Graphs | Total | Use Case |
|---|---|---|---|
| SMALL | 16 | ~62 MB | Quick testing |
| MEDIUM | 28 | ~1.1 GB | Standard experiments |
| LARGE | 37 | ~25 GB | Full evaluation |
| XLARGE | 6 | ~63 GB | Massive-scale testing |
| ALL | 87 | ~89 GB | Complete benchmark |
```bash
# Build GraphBrew
make all

# Run PageRank with LeidenCSR reordering (fast variant)
./bench/bin/pr -f your_graph.el -s -o 17:1.0:3:fast -n 3

# Run PageRank with LeidenDendrogram (hybrid variant)
./bench/bin/pr -f your_graph.el -s -o 16:1.0:hybrid -n 3

# Let AdaptiveOrder choose the best algorithm
./bench/bin/pr -f your_graph.el -s -o 14 -n 3
```

GraphBrew typically achieves:
- 1.2-3x speedup on social networks (high modularity)
- 1.1-1.5x speedup on web graphs
- 1.0-1.2x speedup on road networks (low modularity)
The best algorithm depends on your graph's topology!
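Since the best algorithm depends on topology, a practical approach is to sweep a few `-o` specs on your own graph and time them. Here is a small helper that builds the same command lines shown above so you can run the sweep yourself; the list of orderings to try and the graph filename are placeholders.

```python
# Build PageRank benchmark command lines for a sweep of reordering specs.
# Flag meanings follow the examples on this page; run only after `make all`.
import shlex
import subprocess

def pr_command(graph, order, trials=3):
    """Command line for the PageRank binary with a given -o reordering spec."""
    return ["./bench/bin/pr", "-f", graph, "-s", "-o", str(order), "-n", str(trials)]

# Placeholder sweep: original order, two Leiden variants, and AdaptiveOrder.
for order in ["0", "17:1.0:3:fast", "16:1.0:hybrid", "14"]:
    cmd = pr_command("your_graph.el", order)
    print(shlex.join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually execute
```

Comparing the reported trial times across the sweep tells you directly which ordering wins for your graph, rather than relying on the rough speedup ranges above.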
| Page | Description |
|---|---|
| Home | This page - wiki overview |
| Installation | Build requirements and instructions |
| Quick-Start | 5-minute getting started guide |
| Supported-Graph-Formats | EL, MTX, GRAPH format specs |
| Reordering-Algorithms | All 20 algorithms explained |
| Graph-Benchmarks | PR, BFS, CC, SSSP, BC, TC |
| Community-Detection | Leiden algorithm details |
| Running-Benchmarks | Manual benchmark execution |
| Benchmark-Suite | Automated experiment runner |
| Correlation-Analysis | Feature-algorithm correlation |
| AdaptiveOrder-ML | ML-based algorithm selection |
| Perceptron-Weights | Weight file format & tuning |
| GraphBrewOrder | Per-community hub ordering |
| Cache-Simulation | Cache performance analysis |
| Adding-New-Algorithms | Developer: add algorithms |
| Adding-New-Benchmarks | Developer: add benchmarks |
| Code-Architecture | Codebase structure |
| Python-Scripts | Analysis & utility tools |
| Command-Line-Reference | All CLI flags |
| Configuration-Files | JSON config reference |
| Troubleshooting | Common issues |
| FAQ | Frequently asked questions |
Last updated: January 2026