# xsource_sec
AI Security Research & Tools
---
We specialize in offensive security research for AI systems. Our focus: finding vulnerabilities in LLMs, AI agents, and RAG architectures before attackers do.
- 🔴 **AI Red Teaming**: adversarial testing of production AI systems
- 🛡️ **LLM Security Assessment**: prompt injection, jailbreaks, guardrail testing
- 🤖 **Agent Vulnerability**: tool abuse, MCP attacks, agentic exploitation
- 🔍 **RAG Security Research**: data exfiltration, context-poisoning vectors
| Project | Description |
|---|---|
| `llm-security-payloads` | 200+ curated LLM attack payloads |
| `agentaudit-cli` | Command-line AI security scanner (coming soon) |
🌐 xsourcesec.com
🚀 app.xsourcesec.com
📧 security@xsourcesec.com
---
AgentAudit – Automated AI Security Testing