Mini Load Balancer (Reverse Proxy)

A lightweight, high-performance load balancer written in Python, with support for multiple load-balancing algorithms, health checks, and fault tolerance.

πŸš€ Features

  • Multiple Load Balancing Algorithms

    • Round Robin - evenly distribute requests
    • Least Connections - route to server with fewest active connections
    • IP Hash - consistent routing based on client IP for sticky sessions
  • Health Monitoring

    • Periodic health checks for all backend servers
    • Automatic failover when servers become unavailable
    • Graceful recovery when servers come back online
  • High Performance

    • Asynchronous I/O using asyncio for handling concurrent requests
    • Non-blocking request processing
    • Configurable connection limits and timeouts
  • Observability

    • Comprehensive request logging
    • Real-time metrics collection (RPS, response times, error rates)
    • Status and metrics endpoints for monitoring
  • Easy Deployment

    • Docker Compose setup for local development and testing
    • Configurable via JSON configuration file
    • Multiple backend servers for testing scalability

πŸ› οΈ Architecture

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚   Client    │───▢│  Load Balancer  │───▢│  Backend 1  β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β”‚                 β”‚    β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€€
                   β”‚  - Algorithms   β”‚    β”‚  Backend 2  β”‚
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”‚  - Health Check β”‚    β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€€
β”‚   Client    │───▢│  - Metrics      │───▢│  Backend 3  β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β”‚  - Logging      β”‚    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                   β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
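
To make the flow concrete, here is a minimal sketch of an asyncio reverse proxy, assuming an aiohttp-based implementation. The names below (pick_backend, BACKENDS) are illustrative; the project's actual code lives in src/proxy_server.py and may differ.

# Illustrative asyncio reverse-proxy sketch; pick_backend() stands in for the
# configurable algorithms described above.
from aiohttp import web, ClientSession

BACKENDS = ["http://localhost:8001", "http://localhost:8002", "http://localhost:8003"]
_cursor = 0  # simple round-robin cursor

def pick_backend() -> str:
    global _cursor
    backend = BACKENDS[_cursor % len(BACKENDS)]
    _cursor += 1
    return backend

async def proxy(request: web.Request) -> web.Response:
    # Forward the incoming request to the selected backend and relay its response.
    url = pick_backend() + str(request.rel_url)
    async with ClientSession() as session:
        async with session.request(request.method, url, data=await request.read()) as upstream:
            return web.Response(status=upstream.status, body=await upstream.read())

app = web.Application()
app.router.add_route("*", "/{tail:.*}", proxy)

if __name__ == "__main__":
    web.run_app(app, host="0.0.0.0", port=8080)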

πŸ“‹ Prerequisites

  • Python 3.8+
  • Docker and Docker Compose (for containerized testing)
  • Required Python packages (see requirements.txt)

🚦 Quick Start

1. Install Dependencies

pip install -r requirements.txt

2. Start Backend Servers

# Start individual backend servers
python backend_servers/server.py --port 8001 --name backend-1 &
python backend_servers/server.py --port 8002 --name backend-2 &
python backend_servers/server.py --port 8003 --name backend-3 &

3. Start Load Balancer

python src/main.py --config config.json

The load balancer will start on http://localhost:8080

4. Test the Setup

# Make requests to the load balancer
curl http://localhost:8080/

# Check load balancer status
curl http://localhost:8080/__status__

# View metrics
curl http://localhost:8080/__metrics__

🐳 Docker Deployment

Start with Docker Compose

# Start all services (load balancer + 3 backend servers)
docker-compose up -d

# Scale to more backend servers
docker-compose --profile scaling up -d

# View logs
docker-compose logs -f load-balancer

Stop Services

docker-compose down

βš™οΈ Configuration

Edit config.json to customize the load balancer:

{
  "proxy_port": 8080,
  "proxy_host": "0.0.0.0",
  "algorithm": "round_robin",
  "backend_servers": [
    {"host": "localhost", "port": 8001, "weight": 1},
    {"host": "localhost", "port": 8002, "weight": 1},
    {"host": "localhost", "port": 8003, "weight": 1}
  ],
  "health_check_interval": 30,
  "health_check_timeout": 5,
  "max_connections": 1000,
  "connection_timeout": 30,
  "request_timeout": 30,
  "enable_logging": true,
  "enable_metrics": true
}
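
For reference, one way such a file could be loaded and validated is sketched below. This is illustrative only; src/config.py may be structured differently.

# Hypothetical loader sketch for config.json; field names mirror the example above.
import json
from dataclasses import dataclass
from typing import List

VALID_ALGORITHMS = {"round_robin", "least_connections", "ip_hash"}

@dataclass
class Backend:
    host: str
    port: int
    weight: int = 1

@dataclass
class Config:
    proxy_host: str
    proxy_port: int
    algorithm: str
    backend_servers: List[Backend]
    health_check_interval: int = 30
    health_check_timeout: int = 5

def load_config(path: str = "config.json") -> Config:
    with open(path) as f:
        raw = json.load(f)
    if raw["algorithm"] not in VALID_ALGORITHMS:
        raise ValueError(f"unknown algorithm: {raw['algorithm']}")
    return Config(
        proxy_host=raw.get("proxy_host", "0.0.0.0"),
        proxy_port=raw.get("proxy_port", 8080),
        algorithm=raw["algorithm"],
        backend_servers=[Backend(**b) for b in raw["backend_servers"]],
        health_check_interval=raw.get("health_check_interval", 30),
        health_check_timeout=raw.get("health_check_timeout", 5),
    )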

Algorithm Options

  • round_robin - Distribute requests evenly across servers
  • least_connections - Route to server with fewest active connections
  • ip_hash - Use client IP hash for consistent routing (sticky sessions)

πŸ“Š Load Testing

Run Load Tests

# Basic load test
python tests/load_test.py --requests 1000 --concurrency 20

# Heavy load test
python tests/load_test.py --requests 5000 --concurrency 100

# Test specific endpoint
python tests/load_test.py --requests 500 --concurrency 25 --path /info

# Save results to file
python tests/load_test.py --requests 1000 --concurrency 50 --output results.json
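
tests/load_test.py accepts the flags shown above. As a rough idea of what such a tool does internally, a minimal concurrent load generator could look like this (illustrative sketch only):

# Minimal concurrent load-test sketch using aiohttp (illustrative only).
import asyncio
import time
from aiohttp import ClientSession

async def worker(session, url, n, latencies):
    for _ in range(n):
        start = time.perf_counter()
        async with session.get(url) as resp:
            await resp.read()
        latencies.append((time.perf_counter() - start) * 1000)  # milliseconds

async def run(url="http://localhost:8080/", requests=1000, concurrency=20):
    per_worker = requests // concurrency
    latencies = []
    async with ClientSession() as session:
        await asyncio.gather(*(worker(session, url, per_worker, latencies)
                               for _ in range(concurrency)))
    print(f"{len(latencies)} requests, avg {sum(latencies) / len(latencies):.1f} ms")

if __name__ == "__main__":
    asyncio.run(run())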

Automated Test Suite

# Run comprehensive test suite
chmod +x tests/run_tests.sh
./tests/run_tests.sh

πŸ“ˆ Monitoring Endpoints

  • /__health__ - Load balancer health status
  • /__metrics__ - Performance metrics (JSON)
  • /__status__ - Detailed status with backend information

Sample Metrics Response

{
  "uptime_seconds": 300.45,
  "total_requests": 1500,
  "total_errors": 12,
  "requests_per_second": 4.98,
  "error_rate": 0.8,
  "response_times": {
    "average": 45.2,
    "minimum": 12.1,
    "maximum": 342.7,
    "p95": 156.3
  },
  "status_codes": {
    "200": 1488,
    "500": 12
  }
}
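
Here p95 is the 95th-percentile response time in milliseconds. A snapshot like the one above can be summarized from recorded latencies roughly as follows (illustrative; not necessarily how src/metrics.py computes it):

# Illustrative summary of recorded response times (milliseconds).
import math
import statistics

def summarize(latencies_ms):
    ordered = sorted(latencies_ms)
    rank = max(1, math.ceil(0.95 * len(ordered)))  # nearest-rank 95th percentile
    return {
        "average": round(statistics.mean(ordered), 1),
        "minimum": ordered[0],
        "maximum": ordered[-1],
        "p95": ordered[rank - 1],
    }

print(summarize([12.1, 30.0, 45.2, 80.5, 156.3, 342.7]))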

πŸ§ͺ Testing Scenarios

1. Algorithm Testing

Test different load balancing algorithms:

# Test Round Robin
sed -i 's/"algorithm": ".*"/"algorithm": "round_robin"/' config.json
python src/main.py &
python tests/load_test.py -n 100 -c 5

# Test Least Connections
sed -i 's/"algorithm": ".*"/"algorithm": "least_connections"/' config.json
# Restart and test...

# Test IP Hash
sed -i 's/"algorithm": ".*"/"algorithm": "ip_hash"/' config.json
# Restart and test...

2. Failover Testing

# Start all services
docker-compose up -d

# Run continuous load test
python tests/load_test.py -n 1000 -c 10 &

# Simulate server failure
docker-compose stop backend-1

# Observe automatic failover in logs
docker-compose logs -f load-balancer

# Restart failed server
docker-compose start backend-1

# Verify recovery
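
Failover of this kind relies on the periodic health checks: once a backend stops answering GET /health within the timeout, it is taken out of rotation until it responds again. A minimal checker loop might look like the following hypothetical sketch (the project's logic lives in src/health_checker.py):

# Hypothetical health-check loop; interval and timeout match config.json.
import asyncio
import aiohttp

async def health_check_loop(backends, healthy, interval=30, timeout=5):
    client_timeout = aiohttp.ClientTimeout(total=timeout)
    async with aiohttp.ClientSession(timeout=client_timeout) as session:
        while True:
            for backend in backends:  # e.g. "http://localhost:8001"
                try:
                    async with session.get(f"{backend}/health") as resp:
                        healthy[backend] = resp.status == 200
                except (aiohttp.ClientError, asyncio.TimeoutError):
                    healthy[backend] = False  # drop from rotation until it recovers
            await asyncio.sleep(interval)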

3. Scaling Testing

# Start with 3 backend servers
docker-compose up -d

# Baseline performance test
python tests/load_test.py -n 1000 -c 50 -o baseline.json

# Scale to 5 backend servers
docker-compose --profile scaling up -d

# Performance test with more servers
python tests/load_test.py -n 1000 -c 50 -o scaled.json

# Compare results

πŸ“ Project Structure

mini-load-balancer/
β”œβ”€β”€ src/
β”‚   β”œβ”€β”€ main.py              # Entry point
β”‚   β”œβ”€β”€ proxy_server.py      # Main proxy logic
β”‚   β”œβ”€β”€ load_balancer.py     # Load balancing algorithms
β”‚   β”œβ”€β”€ health_checker.py    # Health monitoring
β”‚   β”œβ”€β”€ logger.py           # Request logging
β”‚   β”œβ”€β”€ metrics.py          # Metrics collection
β”‚   └── config.py           # Configuration management
β”œβ”€β”€ backend_servers/
β”‚   └── server.py           # Test backend server
β”œβ”€β”€ tests/
β”‚   β”œβ”€β”€ load_test.py        # Load testing utility
β”‚   └── run_tests.sh        # Test automation script
β”œβ”€β”€ logs/                   # Log files
β”œβ”€β”€ config.json             # Configuration file
β”œβ”€β”€ docker-compose.yml      # Docker orchestration
β”œβ”€β”€ Dockerfile              # Load balancer container
β”œβ”€β”€ Dockerfile.backend      # Backend server container
└── requirements.txt        # Python dependencies

πŸ”§ Development

Adding New Algorithms

  1. Edit src/load_balancer.py
  2. Add new algorithm method
  3. Update get_next_server() method
  4. Update configuration validation
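
For example, a weighted round-robin strategy (honoring the weight field already present in config.json) could be sketched like this; adapt it to the actual class interface in src/load_balancer.py:

# Hypothetical new strategy: weighted round robin over config-style backend dicts.
from itertools import cycle

def build_weighted_cycle(backend_servers):
    # Repeat each server according to its weight, then cycle through the result.
    expanded = []
    for server in backend_servers:
        expanded.extend([server] * server.get("weight", 1))
    return cycle(expanded)

weighted = build_weighted_cycle([
    {"host": "localhost", "port": 8001, "weight": 2},
    {"host": "localhost", "port": 8002, "weight": 1},
])
print([next(weighted)["port"] for _ in range(6)])  # [8001, 8001, 8002, 8001, 8001, 8002]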

Extending Metrics

  1. Edit src/metrics.py
  2. Add new metric collection methods
  3. Update metrics endpoint in proxy_server.py

Custom Backend Servers

Create custom backend servers by extending the base server in backend_servers/server.py, or by implementing the health check endpoint (/health) in your existing services.
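
For example, a stand-alone backend that satisfies the health check could be as small as this sketch (it assumes an HTTP 200 on /health is all the checker requires; port 8004 is arbitrary):

# Minimal custom backend exposing /health (hypothetical port 8004).
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            body, ctype = b'{"status": "healthy"}', "application/json"
        else:
            body, ctype = b"hello from a custom backend", "text/plain"
        self.send_response(200)
        self.send_header("Content-Type", ctype)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8004), Handler).serve_forever()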

πŸ“‹ Performance Benchmarks

Typical performance on modern hardware:

  • Light Load: 1,000+ RPS with <50ms average response time
  • Medium Load: 500+ RPS with <100ms average response time
  • Heavy Load: 200+ RPS with <200ms average response time

Performance varies based on:

  • Backend server response times
  • Network latency
  • System resources
  • Configuration settings

🀝 Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make changes and add tests
  4. Ensure all tests pass
  5. Submit a pull request

πŸ“„ License

MIT License - see LICENSE file for details

πŸ” Troubleshooting

Common Issues

  1. Port Already in Use

    # Find and kill process using port
    lsof -ti:8080 | xargs kill -9
  2. Backend Servers Not Responding

    # Check if backend servers are running
    curl http://localhost:8001/health
    curl http://localhost:8002/health
    curl http://localhost:8003/health
  3. Docker Issues

    # Restart all services
    docker-compose down && docker-compose up -d
    
    # Check container logs
    docker-compose logs load-balancer
    docker-compose logs backend-1
  4. High Error Rates

    • Check backend server health
    • Verify configuration settings
    • Monitor system resources
    • Review logs for specific errors

Debug Mode

Enable verbose logging:

python src/main.py --verbose

This provides detailed debug information about request routing, health checks, and system operations.
