Break into an AI-protected bank. 10 floors. 10 AI security systems. One vault.
A gamified AI security challenge that teaches social engineering, prompt injection, and other LLM attack techniques through an immersive heist experience.
You're a master thief breaking into Nexus Financial Tower. The bank is protected by 10 AI security systems, each with unique personalities and defenses. Your goal: reach the vault and extract the final access code.
| Wing | Floor | AI | Technique | Difficulty |
|---|---|---|---|---|
| Ground Floor | 1 | Emma (Receptionist) | Social Engineering | ⭐ |
| 2 | Marcus (Security Guard) | Word Obfuscation | ⭐⭐ | |
| Security Wing | 3 | OSCAR (Camera AI) | Misdirection | ⭐⭐ |
| 4 | NOVA (Door AI) | Logic Exploitation | ⭐⭐⭐ | |
| Operations Wing | 5 | Alex (IT Support) | Urgency & Authority | ⭐⭐⭐ |
| 6 | Diana (HR Assistant) | Impersonation | ⭐⭐⭐⭐ | |
| Executive Wing | 7 | ARIA (Archive AI) | Authorization Chains | ⭐⭐⭐⭐ |
| 8 | Victoria (CEO's Assistant) | Executive Impersonation | ⭐⭐⭐⭐ | |
| 9 | The Chairman (Board AI) | Chain of Trust | ⭐⭐⭐⭐⭐ | |
| The Vault | 10 | SENTINEL (Final Guardian) | Everything Combined | ⭐⭐⭐⭐⭐ |
- Python 3.10+
- Node.js 18+
- Anthropic API Key
-
Clone the repository
git clone https://github.com/XSource-Sec/breachlab.git cd breachlab -
Backend setup
cd backend python -m venv venv source venv/bin/activate # On Windows: venv\Scripts\activate pip install -r requirements.txt cp .env.example .env # Edit .env and add your ANTHROPIC_API_KEY
-
Frontend setup
cd ../frontend npm install -
Run the application
Terminal 1 (Backend):
cd backend source venv/bin/activate python main.py
Terminal 2 (Frontend):
cd frontend npm run dev -
Open your browser Navigate to http://localhost:3000
docker-compose up --build- Frontend: React + Vite + Tailwind CSS + Framer Motion
- Backend: FastAPI (Python)
- AI: Anthropic Claude API (claude-3-haiku)
- State: Session-based (no database required)
- 10 unique AI characters with distinct personalities
- Progressive difficulty across 5 security wings
- Real attack techniques: social engineering, obfuscation, logic exploitation
- Hint system after multiple attempts
- Progress tracking with session persistence
- Wing completion celebrations
- Final victory with share options
BreachLab teaches real-world AI security concepts:
- Social Engineering - Building trust and rapport
- Word Obfuscation - Bypassing keyword filters
- Misdirection - Distracting AI from sensitive topics
- Logic Exploitation - Finding edge cases in rule-based systems
- Authority Manipulation - Using urgency and impersonation
- Chain of Trust - Building consistent deception stories
| Endpoint | Method | Description |
|---|---|---|
/api/chat |
POST | Send message to floor AI |
/api/verify |
POST | Verify access code |
/api/hint |
GET | Get hint for current floor |
/api/progress |
GET | Get game progress |
/api/reset |
POST | Reset game |
This game is for educational purposes only. The techniques demonstrated should only be used for:
- Authorized security testing
- Learning AI vulnerabilities
- Understanding defensive measures
Try AgentAudit - Automated AI Security Testing
- Comprehensive vulnerability scanning
- Real-time monitoring
- Detailed security reports
- CI/CD integration
Contributions welcome! See CONTRIBUTING.md for guidelines.
MIT License - see LICENSE for details.
Built by XSource_Sec - AI Security Experts
Website • AgentAudit • GitHub