A comprehensive native macOS application for chatbot testing with three powerful modes: live scenarios, log analysis, and AI-powered adversarial testing.
- Scenario-based Testing: Create multi-step conversation flows with expected responses
- Protocol Support: HTTP REST APIs and WebSocket for real-time communication
- Validation Types: Exact matching, regex patterns, semantic similarity, and custom validators
- Realistic Timing: Configurable delays to simulate human typing patterns
- Real-time Monitoring: Live progress tracking and immediate feedback
- Provider Support: Generic HTTP endpoints, Ollama local models, and cloud APIs
- Configuration Sharing: Export/import configurations for team collaboration
- Context Analysis: Multi-turn conversation quality scoring (Basic implementation available)
- Multi-format Import: Drag-and-drop support for JSON, CSV, and text log files (Planned)
- Pattern Recognition: Identify conversation patterns and success indicators (Planned)
- Metrics Calculation: Response rates, message statistics, and timing analysis (Planned)
- Advanced Filtering: Date ranges, message counts, and content-based filters (Planned)
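The validation types listed above (exact matching, regex patterns, semantic similarity) can be sketched roughly as follows. This is an illustrative Python sketch, not the app's actual Swift implementation, and it stands in for real semantic scoring with `difflib.SequenceMatcher`:

```python
import re
from difflib import SequenceMatcher

def validate(actual: str, expected: str, validation_type: str,
             threshold: float = 0.8) -> bool:
    """Return True if the bot's actual reply satisfies the expectation."""
    if validation_type == "exact":
        return actual.strip() == expected.strip()
    if validation_type == "pattern":
        # Case-insensitive regex search, e.g. "hello|hi|hey|greetings"
        return re.search(expected, actual, re.IGNORECASE) is not None
    if validation_type == "semantic":
        # Stand-in similarity score; a real implementation would use embeddings
        return SequenceMatcher(None, actual.lower(), expected.lower()).ratio() >= threshold
    raise ValueError(f"unknown validation type: {validation_type}")
```

For example, `validate("Hi there!", "hello|hi|hey|greetings", "pattern")` passes because the regex matches case-insensitively.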
- AI-Powered Testing: Let AI models test your chatbot through realistic conversations
- Multiple Providers:
- Ollama - Local models (llama2, mistral) - Free and private
- OpenAI - GPT models - Requires API key
- Anthropic - Claude models - Requires API key
- Testing Strategies:
- Exploratory - Broad questions to map capabilities
- Adversarial - Edge cases and challenging inputs
- Focused - Deep dive into specific features
- Stress - Rapid context switching and complex scenarios
- Safety Controls: Cost monitoring, rate limiting, and content filtering
- Multiple Formats: Export as HTML, JSON, or Markdown
- Interactive Viewing: Native macOS interface for browsing results
- Detailed Transcripts: Complete conversation histories with timestamps
- Validation Analysis: Pass/fail rates with detailed explanations
- Visual Summaries: Charts and metrics for quick insights
- Secure API Key Storage: API keys stored in macOS Keychain with encryption
- No Plaintext Secrets: Keys never persisted in configuration files
- User Feedback: Clear notifications if keychain operations fail
- Sandboxed Application: Runs with macOS App Sandbox for security
- macOS 13.0 or later
- Xcode 15.0 or later (for development)
- Swift 5.9 or later (for development)
- Download the latest release from the releases page
- Drag `Patience.app` to your Applications folder
- Launch Patience from Applications or Spotlight

To build from source instead:

- Clone the repository:

  ```bash
  git clone https://github.com/ServerWrestler/patience-chatbot.git
  cd patience-chatbot
  ```

- Build and run:

  ```bash
  # Open in Xcode
  open Patience.xcodeproj

  # Or build from the command line
  xcodebuild -project Patience.xcodeproj -scheme Patience build
  ```
- Click "New Configuration" in the Testing tab
- Enter your bot's endpoint URL (e.g., `http://localhost:3000/chat`)
- Add conversation scenarios with expected responses
- Configure validation rules and timing
- Click "Run Tests" to execute
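To try the quick start without a production bot, you can point Patience at a stub endpoint. The minimal Python server below is an illustration only: it assumes a simple `{"message": ...}` request / `{"response": ...}` reply JSON shape, so adapt it to whatever schema your bot actually uses:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubBot(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body and pull out the user's message
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        message = json.loads(body or b"{}").get("message", "")
        # Echo a canned friendly reply so a greeting scenario can pass
        payload = json.dumps({"response": f"Hello! You said: {message}"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        pass  # keep the console quiet

# To run on the endpoint used above:
# HTTPServer(("localhost", 3000), StubBot).serve_forever()
```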
- Switch to the Analysis tab
- Note: the Analysis tab currently supports basic context analysis only; full log import and analysis features are planned for a future release
- Go to the Adversarial tab
- Click "New Configuration"
- Set up your target bot endpoint
- Choose an AI provider (Ollama for local, OpenAI/Anthropic for cloud)
- Select a testing strategy and parameters
- Click "Start Adversarial Testing"
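Conceptually, adversarial testing alternates between the AI tester and your bot until the configured turn limit is reached. A hypothetical Python sketch of that loop (the `ask_tester` and `ask_target` callables are stand-ins for the real provider and endpoint calls):

```python
from typing import Callable, List, Tuple

def adversarial_conversation(
    ask_tester: Callable[[List[Tuple[str, str]]], str],  # AI provider generates the next probe
    ask_target: Callable[[str], str],                    # target chatbot answers it
    max_turns: int = 10,
) -> List[Tuple[str, str]]:
    """Run one adversarial conversation and return the (probe, reply) transcript."""
    transcript: List[Tuple[str, str]] = []
    for _ in range(max_turns):
        probe = ask_tester(transcript)  # tester sees the history so far
        reply = ask_target(probe)
        transcript.append((probe, reply))
    return transcript
```

In the app, each run like this corresponds to one of the configured number of conversations, with `max_turns` playing the role of the `maxTurns` setting.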
Example scenario testing configuration:

```json
{
  "targetBot": {
    "name": "My Chatbot",
    "protocol": "http",
    "endpoint": "https://api.example.com/chat",
    "provider": "generic"
  },
  "scenarios": [
    {
      "id": "greeting-test",
      "name": "Greeting Test",
      "steps": [
        {
          "message": "Hello!",
          "expectedResponse": {
            "validationType": "pattern",
            "expected": "hello|hi|hey|greetings",
            "threshold": 0.8
          }
        }
      ],
      "expectedOutcomes": [
        {
          "type": "pattern",
          "expected": "friendly.*response",
          "description": "Bot should respond in a friendly manner"
        }
      ]
    }
  ],
  "validation": {
    "defaultType": "pattern",
    "semanticSimilarityThreshold": 0.8
  },
  "timing": {
    "enableDelays": true,
    "baseDelay": 1000,
    "delayPerCharacter": 50,
    "rapidFire": false,
    "responseTimeout": 30000
  },
  "reporting": {
    "outputPath": "~/Documents/Patience Reports",
    "formats": ["html", "json"],
    "includeConversationHistory": true,
    "verboseErrors": true
  }
}
```

Example adversarial testing configuration:

```json
{
  "targetBot": {
    "name": "Production Chatbot",
    "protocol": "http",
    "endpoint": "https://api.example.com/chat"
  },
  "adversarialBot": {
    "provider": "ollama",
    "model": "llama2",
    "endpoint": "http://localhost:11434"
  },
  "conversation": {
    "strategy": "adversarial",
    "maxTurns": 10,
    "goals": [
      "Test error handling",
      "Find edge cases",
      "Verify context retention"
    ]
  },
  "execution": {
    "numConversations": 5,
    "concurrent": 1
  }
}
```

- Install Ollama from ollama.ai
- Pull a model: `ollama pull llama2`
- Start Ollama (it runs on `http://localhost:11434`)
- Select "Ollama" provider in Patience
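Under the hood, talking to a local Ollama model means posting to its `/api/generate` endpoint. The request shape below follows Ollama's documented non-streaming API; the helper itself is only an illustration of what Patience would send:

```python
import json

def ollama_request(model: str, prompt: str,
                   endpoint: str = "http://localhost:11434"):
    """Build the URL and JSON body for a non-streaming Ollama completion."""
    url = f"{endpoint}/api/generate"
    body = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return url, body
    # Send with urllib.request or curl; the reply's "response" field holds the text
```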
- Get API key from platform.openai.com
- Select "OpenAI" provider in Patience
- Enter API key (stored securely in Keychain)
- Choose model (gpt-4, gpt-3.5-turbo)
- Get API key from console.anthropic.com
- Select "Anthropic" provider in Patience
- Enter API key (stored securely in Keychain)
- Choose model (claude-3-opus, claude-3-sonnet)
- DOCUMENTATION.md - Comprehensive feature documentation
- CONTRIBUTING.md - Development and contribution guidelines
- CHANGELOG.md - Version history and release notes
- SECURITY.md - Security policies and reporting
- Issues: GitHub Issues
- Wiki: Project Wiki
- Discussions: GitHub Discussions
We welcome contributions! Please see CONTRIBUTING.md for:
- Development setup
- Coding standards
- Pull request process
- Testing guidelines
This project is licensed under the MIT License - see the LICENSE file for details.