# QuantX Engine

Production-ready HFT platform combining C++ performance with Python ML capabilities

🎯 Quick Start • 📚 Documentation • 🧪 Testing • 🚀 Deployment
```bash
# 🔥 Automated installation (recommended)
git clone <repository-url> && cd quantx-engine
chmod +x scripts/setup.sh && ./scripts/setup.sh
```

**What this does:** Installs dependencies → Sets up Python env → Trains ML models → Builds C++ engine → Runs tests
| Method | Command | Best For |
|---|---|---|
| 🐳 Docker | `docker-compose up -d` | Local development |
| ☁️ VPS | `./scripts/deploy.sh` | Paper trading |
| 🖥️ Local | `./build/quantx_engine` | Testing & debug |
## 🖥️ Hardware & OS Requirements
| Component | Minimum | Recommended |
|---|---|---|
| OS | Ubuntu 20.04+ / macOS 10.15+ | Ubuntu 22.04 LTS |
| CPU | Multi-core x64 | Intel/AMD 8+ cores |
| RAM | 8GB | 16GB+ |
| Storage | 10GB free | 50GB SSD |
| Network | Stable broadband | Low-latency connection |
## 🛠️ Software Dependencies
**Core dependencies:**
- C++17 compiler (GCC 9+, Clang 10+)
- CMake 3.16+
- Python 3.8+
- Docker & Docker Compose (optional)

**Libraries (auto-installed):**
- Boost 1.70+
- ONNX Runtime 1.16+
- WebSocket++
- nlohmann/json

### 🐧 Ubuntu/Debian

```bash
sudo apt-get update && sudo apt-get install -y \
    build-essential cmake git wget curl pkg-config \
    libboost-all-dev libssl-dev nlohmann-json3-dev \
    libwebsocketpp-dev python3 python3-pip python3-venv
```

### 🍎 macOS

```bash
brew install cmake boost openssl nlohmann-json websocketpp python3
```

### ONNX Runtime

```bash
# Download and install ONNX Runtime
wget https://github.com/microsoft/onnxruntime/releases/download/v1.16.3/onnxruntime-linux-x64-1.16.3.tgz
tar -xzf onnxruntime-linux-x64-1.16.3.tgz
sudo cp -r onnxruntime-linux-x64-1.16.3/include/* /usr/local/include/
sudo cp -r onnxruntime-linux-x64-1.16.3/lib/* /usr/local/lib/
sudo ldconfig
```

### 🐍 Python Environment

```bash
# Create and activate virtual environment
python3 -m venv venv && source venv/bin/activate

# Install ML dependencies
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
pip install numpy pandas scikit-learn onnx onnxruntime joblib matplotlib seaborn
```

### 🧠 Train, Build & Run

```bash
# Train ML models
source venv/bin/activate && python scripts/export_models_to_onnx.py

# Build C++ engine
mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release && make -j$(nproc)

# 🚀 Launch the engine
./quantx_engine
```

### ⚙️ Configuration

Sample `config/config.json`:

```json
{
  "engine": {
    "initial_capital": 1000000.0,
    "paper_trading": true,
    "log_level": "INFO"
  },
  "market_data": {
    "websocket_url": "wss://api.kite.trade/ws",
    "api_key": "your_api_key_here",
    "symbols": ["NSE:NIFTY50", "NSE:BANKNIFTY", "NSE:RELIANCE"]
  },
  "risk_management": {
    "max_position_value": 100000.0,
    "max_daily_loss": 50000.0,
    "max_drawdown": 0.15,
    "leverage_limit": 2.0
  }
}
```

| Provider | Purpose | Setup Link |
|---|---|---|
| Zerodha Kite | NSE/BSE Market Data | kite.trade |
| Paper Trading | Risk-free Testing | No keys required ✅ |
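The risk limits above lend themselves to an automated preflight check before the engine starts. A minimal sketch, assuming the field names from the sample config (the bounds and the `validate_config` helper are illustrative, not part of the engine):

```python
import json

# Bounds are illustrative; tighten them to match your own risk policy.
RULES = {
    "max_drawdown": lambda v: 0.0 < v <= 0.25,
    "leverage_limit": lambda v: 1.0 <= v <= 5.0,
}

def validate_config(cfg: dict) -> list[str]:
    """Return a list of human-readable problems; an empty list means OK."""
    problems = []
    risk = cfg.get("risk_management", {})
    for key, ok in RULES.items():
        if key not in risk:
            problems.append(f"missing risk_management.{key}")
        elif not ok(risk[key]):
            problems.append(f"risk_management.{key}={risk[key]} out of bounds")
    # Paper trading should stay enabled until go-live is signed off.
    if not cfg.get("engine", {}).get("paper_trading", False):
        problems.append("engine.paper_trading is disabled")
    return problems

cfg = {
    "engine": {"paper_trading": True},
    "risk_management": {"max_drawdown": 0.15, "leverage_limit": 2.0},
}
print(validate_config(cfg))  # → []
```

In practice you would load the real file with `json.load(open("config/config.json"))` and refuse to launch if the list is non-empty.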
**Grafana dashboard:**
- URL: http://localhost:3000
- Credentials: admin / admin
- Real-time metrics: latency, P&L, risk, system health
| Component | Target | Achieved | Status |
|---|---|---|---|
| Market Data Processing | < 100 µs | ~50 µs | ✅ |
| Order Placement | < 500 µs | ~200 µs | ✅ |
| ML Inference | < 1 ms | ~0.3 ms | ✅ |
| Risk Checks | < 50 µs | ~20 µs | ✅ |
| End-to-End | < 2 ms | ~1 ms | 🔄 |
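Latency figures like these are usually reported as percentiles over many iterations of the hot path, not a single timing. A generic sketch of that measurement loop (the `measure_latency_us` helper and the dummy workload are illustrative; the project's real benchmark is `./quantx_engine --benchmark`):

```python
import statistics
import time

def measure_latency_us(fn, iterations=10_000):
    """Time fn() per call and return (median, p99) latency in microseconds."""
    samples = []
    for _ in range(iterations):
        start = time.perf_counter_ns()
        fn()
        samples.append((time.perf_counter_ns() - start) / 1_000.0)
    samples.sort()
    p99 = samples[int(len(samples) * 0.99) - 1]  # 99th-percentile sample
    return statistics.median(samples), p99

# Dummy stand-in for a market-data tick handler.
median_us, p99_us = measure_latency_us(lambda: sum(range(100)))
print(f"median={median_us:.1f}us p99={p99_us:.1f}us")
```

Reporting the p99 alongside the median matters in HFT: a fast average with a long tail still misses fills.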
```bash
# Main engine logs
tail -f logs/quantx_engine.log

# Performance metrics
tail -f logs/performance.log

# Trade execution logs
tail -f logs/trades.log
```

## 🧪 Testing

```bash
# Unit tests
cd build && ./test_quantx

# Performance benchmarks
./quantx_engine --benchmark

# ML model validation
python scripts/benchmark_models.py

# Paper trading simulation
./quantx_engine --paper-trading --duration=3600
```

**Pre-deployment checklist:**

- All unit tests passing
- Latency targets met
- ML models converged
- Risk limits enforced
- Paper trading profitable
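The "paper trading profitable" item can be checked mechanically from the trade log. A sketch assuming a simple CSV layout for `logs/trades.log` (the column names are an assumption; adapt them to the engine's actual log format):

```python
import csv
import io

def realized_pnl(trade_log) -> float:
    """Sum the pnl column of a CSV trade log (format assumed, not verified)."""
    return sum(float(row["pnl"]) for row in csv.DictReader(trade_log))

# In-memory sample standing in for open("logs/trades.log").
sample = io.StringIO(
    "timestamp,symbol,side,qty,price,pnl\n"
    "1,NSE:RELIANCE,BUY,10,2500.0,0.0\n"
    "2,NSE:RELIANCE,SELL,10,2510.0,100.0\n"
)
print(realized_pnl(sample))  # → 100.0
```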
```text
quantx-engine/
├── 🔧 src/                 # C++ Core Engine
│   ├── core/               # Market data processing
│   ├── ml/                 # ONNX ML inference
│   ├── risk/               # Risk management
│   └── trading/            # Order execution
├── 🧠 scripts/             # Python ML pipeline
├── ⚙️ config/              # Configuration files
├── 🧪 tests/               # Unit & integration tests
├── 📊 monitoring/          # Grafana dashboards
└── 🐳 docker-compose.yml   # Container orchestration
```
**Security hardening:**
- Change default passwords
- Enable SSL/TLS encryption
- Configure firewall rules
- Set up log rotation
- Enable monitoring alerts
- Implement backup procedures

**Performance tuning:**
- CPU affinity for critical threads
- Huge pages memory allocation
- Network buffer optimization
- Kernel bypass (DPDK) setup
- Hot path profiling & optimization

**Compliance:**
- Audit logging implementation
- Regulatory reporting setup
- Data retention policies
- Trade reconstruction capability
- Emergency kill switches
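The kill-switch item maps directly onto the `max_daily_loss` limit from the sample config: once cumulative daily loss breaches the limit, trading halts until an explicit reset. An illustrative Python sketch (the real enforcement lives in the C++ risk module; the `KillSwitch` class and its method names are hypothetical):

```python
class KillSwitch:
    """Trip once realized daily loss breaches a configured limit.

    Illustrative only; in the engine this check runs in the C++ hot path.
    """

    def __init__(self, max_daily_loss: float):
        self.max_daily_loss = max_daily_loss
        self.daily_pnl = 0.0
        self.tripped = False

    def on_fill(self, pnl: float) -> bool:
        """Record a fill's P&L; return True if trading may continue."""
        self.daily_pnl += pnl
        if self.daily_pnl <= -self.max_daily_loss:
            self.tripped = True  # latches until an explicit daily reset
        return not self.tripped

ks = KillSwitch(max_daily_loss=50_000.0)  # matches the sample config
print(ks.on_fill(-20_000.0))  # → True
print(ks.on_fill(-35_000.0))  # → False (cumulative loss 55k breaches 50k)
```

The latch is deliberate: a switch that re-arms itself as soon as P&L recovers would let a misbehaving strategy oscillate in and out of trading.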
We welcome contributions! Here's how to get started:

1. 🍴 Fork the repository
2. 🌿 Create a feature branch
3. ✨ Make your changes
4. 🧪 Add comprehensive tests
5. ✅ Run the test suite: `./build/test_quantx`
6. 📬 Submit a pull request
| Language | Style Guide | Formatter |
|---|---|---|
| C++ | Google C++ Style | clang-format |
| Python | PEP 8 | black |
| Documentation | Markdown | prettier |
### 🚨 Build Issues

```bash
# Missing ONNX Runtime
export CMAKE_PREFIX_PATH=/usr/local:$CMAKE_PREFIX_PATH

# Boost libraries not found
sudo apt-get install libboost-all-dev

# WebSocket++ headers missing
sudo apt-get install libwebsocketpp-dev
```

### 🚨 Runtime Errors

```bash
# Market data connection failed
# → Check API keys in config/config.json

# ONNX model not found
# → Run: python scripts/export_models_to_onnx.py

# Permission denied
# → Run: chmod +x scripts/*.sh
```

### ⚡ Performance Issues

```bash
# Enable CPU performance mode
echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

# Increase network buffer sizes
echo 'net.core.rmem_max = 134217728' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
```

| Platform | Link | Purpose |
|---|---|---|
| 🐛 Issues | GitHub Issues | Bug reports |
| 💬 Discussions | GitHub Discussions | Q&A |
| 📧 Email | support@quantx-engine.com | Direct support |
| 💬 Discord | QuantX Community | Real-time chat |
This project is licensed under the MIT License - see the LICENSE file for details.
> 🚨 **Risk Warning:** This software is for educational and research purposes only. Trading financial instruments involves substantial risk of loss and is not suitable for all investors. Past performance is not indicative of future results. The authors and contributors are not responsible for any financial losses incurred through the use of this software.
Special thanks to the open-source community and these amazing projects:
| Library | Purpose | Link |
|---|---|---|
| 🧠 ONNX Runtime | ML Inference | onnxruntime.ai |
| 🔌 WebSocket++ | Real-time Data | GitHub |
| 📄 nlohmann/json | JSON Parsing | GitHub |
| 🚀 Boost | System Utilities | boost.org |
| 🔥 PyTorch | ML Training | pytorch.org |