Enterprise LLM Security in 3 Lines of Code
Stop prompt injection, jailbreaks, and data leaks in production LLM applications.
pip install promptshieldsfrom promptshield import Shield
shield = Shield.balanced()
result = shield.protect_input(user_input, system_prompt)
if result['blocked']:
return {"error": "Unsafe input detected"}That's it. Production-ready security in 3 lines.
| Feature | PromptShields | DIY Regex | Paid APIs |
|---|---|---|---|
| Setup Time | 3 minutes | Weeks | Days |
| Cost | Free | Free | $$$$ |
| Privacy | 100% Local | Local | Cloud |
| Accuracy | 98% | ~60% | ~95% |
| ML Models | Included | None | Black box |
- ✅ Prompt injection attacks
- ✅ Jailbreak attempts
- ✅ System prompt extraction
- ✅ PII leakage
- ✅ Session anomalies
Don't use one shield everywhere. Layer them strategically:
Choose the right tier for your application:
Shield.fast() # ~1ms - High throughput (pattern matching)
Shield.balanced() # ~2ms - Production default (patterns + session tracking)
Shield.strict() # ~7ms - Sensitive apps (+ 1 ML model + PII detection)
Shield.secure() # ~12ms - Maximum security (+ 3 ML models ensemble)📚 Full Documentation - Complete guide with framework integrations
⚡ Quickstart Guide - Get running in 5 minutes
MIT License - see LICENSE
Built by Neuralchemy