LLM Penetration Testing Framework - Discover vulnerabilities in AI applications before attackers do. 100attacks + AI-powered adaptive mode.
-
Updated
Feb 2, 2026 - Python
LLM Penetration Testing Framework - Discover vulnerabilities in AI applications before attackers do. 100attacks + AI-powered adaptive mode.
A deterministic runtime security SDK for LLM applications that prevents prompt injection, data leakage, and rogue agent behavior using high-performance, auditable rule-based guards instead of probabilistic AI inference.
Add a description, image, and links to the langchain-security topic page so that developers can more easily learn about it.
To associate your repository with the langchain-security topic, visit your repo's landing page and select "manage topics."