A powerful workflow engine for Docker, Slurm and more, providing both CLI and API interfaces for seamless integration. Transform any accessible machine into a first-class member of your computational fleet.
dxflow provides a unified interface for orchestrating workflows across different computing environments with enterprise-grade security and scalability. Originally developed at DiPhyX for scientific computing, it has evolved into a production-grade engine for any distributed computing need.
- Universal Deployment - Deploy on any infrastructure: cloud VMs, GPU nodes, HPC clusters, or laptops
- Unified Interface - Consistent CLI, REST API, and intuitive web UI across all environments (see the example after this list)
- Container Orchestration - Native Docker Compose integration with real-time monitoring
- Secure by Design - RSA key-pair authentication with fine-grained access control
- Real-time Monitoring - Live logs, metrics, and workflow status tracking
- Multi-Scheduler Support - Works with Docker, Kubernetes, Slurm, PBS, and other schedulers
- Secure Tunneling - Expose services through authenticated WebSocket bridges
- Bridge Mode - Secure proxy connections for remote access and federation
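As a small illustration of the unified interface, the same engine can be checked from the CLI or over plain HTTP. The sketch below only reuses the quick-start commands shown later and assumes the web interface is served on localhost, as in the quick start; adjust the address if your engine is bound elsewhere.
# Check the engine from the CLI
dxflow engine ping
# Check that the web interface answers over HTTP
curl -sI http://localhost | head -n 1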
Quick Install (Linux/macOS):
wget -qO- https://raw.githubusercontent.com/diphyx/dxflow/main/assets/install.sh | sudo bash
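If you prefer to review the installer before running it with root privileges, download it first (a generic shell practice, not a dxflow-specific requirement):
# Download, inspect, then run the install script
wget -qO install.sh https://raw.githubusercontent.com/diphyx/dxflow/main/assets/install.sh
less install.sh
sudo bash install.sh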
Manual Installation:
- Download the latest release for your platform:
  # Visit https://github.com/diphyx/dxflow/releases
  # Or use curl for the latest version:
  curl -L -o dxflow.tar.gz "https://github.com/diphyx/dxflow/releases/latest/download/dxflow-$(uname -s)-$(uname -m).tar.gz"
- Extract and install:
  tar -xzf dxflow.tar.gz
  sudo mv dxflow /usr/local/bin/
  chmod +x /usr/local/bin/dxflow
- Verify installation:
  dxflow --version
  dxflow --help
# Start the dxflow engine
dxflow boot up
# Access the web interface
open http://localhost
# Check engine status
dxflow engine ping
# Get system information
dxflow engine info
# Update to latest version
dxflow engine update
dxflow operates as a lightweight 4-layer architecture that integrates seamlessly with existing infrastructure:
┌──────────────────────────┐
│  Your Applications       │  ← Run workloads unchanged
├──────────────────────────┤
│  Native Schedulers       │  ← Docker, K8s, Slurm, PBS
├──────────────────────────┤
│  dxflow Engine           │  ← Unified access layer
├──────────────────────────┤
│  Your Infrastructure     │  ← Any compute resource
└──────────────────────────┘
Deployment Patterns:
- Single Node: All-in-one development and testing (see the sketch after this list)
- Hub-Node: Centralized control with distributed execution
- Federated: Multiple interconnected dxflow instances
- Bridge Mode: Secure tunneling for remote access
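As a concrete starting point, the Single Node pattern needs nothing beyond the quick-start commands already shown above; the sketch below simply restates them, while the multi-engine patterns (Hub-Node, Federated, Bridge Mode) are configured as described in the Advanced Concepts and Tunneling documentation.
# Single Node: engine, scheduler, and workloads on one machine
dxflow boot up        # start the engine locally
dxflow engine ping    # confirm it responds
dxflow engine info    # inspect the host it manages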
- Computational Chemistry: GROMACS, Quantum ESPRESSO, custom solvers
- Bioinformatics: Genomics pipelines, protein folding simulations
- Physics: CFD simulations, materials science modeling
- Machine Learning: Multi-GPU training, distributed inference
- Data Processing: Large-scale ETL and analytics pipelines
- CI/CD: Distributed testing and deployment workflows
- Edge Computing: IoT data processing and edge-to-cloud workflows
- Development: Multi-environment testing and staging
- Course Labs: Consistent computational environments for students
- Research Groups: Shared access to GPU clusters and HPC resources
- Collaboration: Multi-institutional research projects
Comprehensive documentation is available in the following sections:
- Getting Started - Installation and first steps
- User Interface - Web-based management console
- CLI Reference - Complete command-line interface guide with command matrix
- API Documentation - REST API integration and endpoints
- Advanced Concepts - Architecture and deployment patterns
- Licensing - License management and permissions
- Boot Configuration - Engine startup and daemon modes
- Authentication - Security and access control
- Streaming - Real-time data and event handling
- Tunneling - Secure proxy and bridge connections
- FAQs - Common questions and solutions
Pre-configured workflows and applications ready to deploy:
- Getting Started - Learn how dxflow Hub works and deploy workflows
- Genomics - DNA/RNA sequencing analysis workflows
- Molecular - Molecular simulation tools (GROMACS, Amber)
- Structural - Cryo-EM and structure prediction workflows
- Data Science - Jupyter, VS Code, Python/R environments
- Fluid Flow - CFD tools (OpenFOAM, SU2)
Each workflow includes complete Docker Compose configurations, setup guides, and best practices. Browse the hub to find production-ready solutions for your research domain.
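For orientation, the sketch below shows the general shape of such a configuration; the service name, image, port, and volume are illustrative placeholders, not the contents of an actual hub entry.
# Hypothetical example of the kind of Docker Compose file a hub workflow ships with
cat > docker-compose.yml <<'EOF'
services:
  notebook:
    image: jupyter/minimal-notebook:latest
    ports:
      - "8888:8888"
    volumes:
      - ./work:/home/jovyan/work
EOF
# Run it directly, or hand it to the dxflow engine (see the CLI Reference)
docker compose up -d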
Minimum requirements:
- OS: Linux (any distribution), macOS 10.14+, Windows 10+
- Architecture: x86_64 (AMD64) or ARM64
- Memory: 512MB RAM
- Storage: 100MB disk space
- Network: Internet connection for installation
Recommended for production:
- Memory: 2GB+ RAM for production workloads
- Storage: 1GB+ for logs and temporary files
- Network: Stable connection for distributed deployments
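Before installing, a few generic shell commands (not dxflow-specific) are enough to confirm a host meets these numbers:
uname -sm           # OS and architecture (expect x86_64/AMD64 or ARM64)
df -h /usr/local    # free disk space at the default install prefix
# Memory: `free -h` on Linux, `vm_stat` on macOS, Task Manager on Windows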
| Platform | Status | Notes |
|---|---|---|
| Linux | ✅ Full Support | All distributions, containers, HPC |
| macOS | ✅ Full Support | Intel and Apple Silicon |
| Windows | ✅ Full Support | Native and WSL2 |
dxflow includes a General License that provides:
- ✅ Free until 2030 - No cost for core functionality
- ✅ Full Feature Access - All core modules included
- ✅ No Registration Required - Start using immediately
- ✅ Production Ready - No limitations for real workloads
For advanced features like bridge connections or custom licensing, see the licensing documentation.
- Documentation: Start with our comprehensive guides above
- Issues: GitHub Issues for bugs and feature requests
- Direct Support: info@diphyx.com for enterprise needs
- Email: info@diphyx.com
- Schedule Call: Book 30-minute consultation
- Phone: +1 (619) 693-6161
- Website: diphyx.com
dxflow is developed by DiPhyX, a company founded by scientists with over 20 years of combined experience on national supercomputers and more than 50 published papers. We understand the challenges of computational research and build tools to accelerate scientific discovery.
dxflow began as an internal initiative at DiPhyX to streamline the sprawling scripts, clusters, and ad-hoc logs that slow down scientific progress. What started as a weekend hack to build an "MLflow for physics and chemistry" has evolved through numerous projects in bioinformatics, CFD, and materials science into a robust, production-grade engine available to everyone.
To accelerate scientific innovation by providing unified, scalable, and intuitive cloud platforms for end-to-end computational pipelines.
- Scientific-First: Built for real research needs, not just enterprise IT
- No Vendor Lock-in: Runs on your existing infrastructure
- Researcher-Friendly: Designed by scientists who understand computational workflows and the pain of failed overnight runs