Welcome to the comprehensive documentation for PyPongAI, an advanced neuroevolution research platform for training AI agents to play Pong using NEAT (NeuroEvolution of Augmenting Topologies).
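
To get a quick feel for what NEAT training looks like in code, here is a minimal sketch of a typical evolution loop built on the widely used neat-python library. It is illustrative only: the configuration path, the fitness-evaluation stub, and the generation count are assumptions, and the project's actual training pipeline (including its own recurrent-network implementation) is documented in the guides listed below.

```python
import neat

def eval_genomes(genomes, config):
    """Assign a fitness to every genome, e.g. by scoring headless Pong matches."""
    for genome_id, genome in genomes:
        net = neat.nn.RecurrentNetwork.create(genome, config)
        genome.fitness = 0.0  # replace with the result of simulated Pong games

# Load a NEAT configuration file (the path here is illustrative).
config = neat.Config(
    neat.DefaultGenome,
    neat.DefaultReproduction,
    neat.DefaultSpeciesSet,
    neat.DefaultStagnation,
    "neat_config.txt",
)

population = neat.Population(config)
population.add_reporter(neat.StdOutReporter(True))  # log per-generation stats
winner = population.run(eval_genomes, 50)           # evolve for 50 generations
```
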
- Quick Start Guide - Get up and running in 5 minutes
- Installation - Detailed installation instructions and requirements
- Configuration - Understanding and customizing settings
- Architecture Overview - High-level system design and components
- NEAT Algorithm - Understanding neuroevolution and NEAT
- Training Pipeline - How AI agents are trained
- State Management - UI state system architecture
- Training Guide - Complete training workflow
- Competitive Training - ELO-based matchmaking system
- Novelty Search - Behavioral diversity and exploration
- Curriculum Learning - Progressive difficulty training
- Match Recording - Recording and replaying matches
- Analytics System - Performance metrics and visualization
- League System - Tournament and tier management
- Human vs AI - Playing against trained agents
- AI Module - Core training functions
- Game Engine - Visual game implementation
- Game Simulator - Headless training simulator
- Configuration API - Configuration constants
- RNN Implementation - Recurrent networks for temporal reasoning
- ELO System - Rating calculation and matchmaking
- Performance Optimization - Training speed optimization
- Extending the System - Adding new features
- File Structure - Project organization
- Data Formats - JSON schemas and data structures
- Testing Guide - Running and writing tests
- Troubleshooting - Common issues and solutions
PyPongAI is a production-ready research platform that combines:
- Recurrent Neural Networks for temporal memory
- ELO-based competitive training for stable skill assessment (see the rating-update sketch after this list)
- Novelty search for behavioral diversity
- Curriculum learning for progressive difficulty
- Comprehensive analytics for research insights
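
As a concrete example of the ELO-based competitive training mentioned above, the sketch below implements the standard ELO update rule. The helper names (`expected_score`, `update_elo`) and the K-factor of 32 are illustrative assumptions rather than the project's actual API; see the ELO System reference for the real rating and matchmaking code.

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that player A beats player B under the ELO model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def update_elo(rating_a: float, rating_b: float, score_a: float, k: float = 32.0):
    """Return updated (rating_a, rating_b) after a match.

    score_a is 1.0 for a win, 0.5 for a draw, 0.0 for a loss.
    """
    e_a = expected_score(rating_a, rating_b)
    new_a = rating_a + k * (score_a - e_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - e_a))
    return new_a, new_b

# Example: an evenly rated agent beats its opponent.
print(update_elo(1200, 1200, 1.0))  # -> (1216.0, 1184.0)
```
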
- New to the project? Start with the Quick Start Guide
- Want to train your first AI? See the Training Guide
- Interested in the algorithms? Check out NEAT Algorithm and Novelty Search
- Need API details? Browse the API Reference section
- Having issues? Check Troubleshooting
Production Ready - Successfully trained for 50+ generations
- Best fitness achieved: 1876
- Stable evolution with 2 species
- Training speed of ~1 second per generation
- 41 unit tests covering core functionality
See the root repository for contribution guidelines. All documentation follows Markdown best practices and should be clear, concise, and example-driven.
The following legacy documents are preserved for reference:
- System Analysis - Historical system analysis
- Training (Legacy) - Original training documentation
For current documentation, refer to the structured guides above.