AIsbom is a specialized security and compliance scanner for Machine Learning artifacts.
Unlike generic SBOM tools that only parse requirements.txt, AIsbom performs Deep Binary Introspection on model files (.pt, .pkl, .safetensors, .gguf) to detect malware risks and legal license violations hidden inside the serialized weights.
Install directly from PyPI. No cloning required.
```
pip install aisbom-cli
```
Note: The package name is `aisbom-cli`, but the command you run is `aisbom`.
Point it at any directory containing your ML project. It scans recursively for requirements files AND binary model artifacts.
```
aisbom scan ./my-project-folder
```
You will see a combined Security & Legal risk assessment in your terminal:
```
🧠 AI Model Artifacts Found
┏━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Filename            ┃ Framework   ┃ Security Risk        ┃ Legal Risk                  ┃
┡━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ bert_finetune.pt    │ PyTorch     │ CRITICAL (RCE Found) │ UNKNOWN                     │
│ safe_model.st       │ SafeTensors │ LOW                  │ UNKNOWN                     │
│ restricted_model.st │ SafeTensors │ LOW                  │ LEGAL RISK (cc-by-nc-4.0)   │
│ llama-3-quant.gguf  │ GGUF        │ LOW                  │ LEGAL RISK (cc-by-nc-sa)    │
└─────────────────────┴─────────────┴──────────────────────┴─────────────────────────────┘
```
A compliant `sbom.json` (CycloneDX v1.6), including SHA-256 hashes and license data, is generated in your current directory.
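If you want to post-process that file in CI, here is a minimal sketch (plain standard-library Python, not an AIsbom API; it assumes the default `sbom.json` in the current directory) that lists each component with its SHA-256 hash and license data:

```python
# Minimal sketch: summarize a CycloneDX sbom.json (standard library only).
import json

with open("sbom.json") as f:
    bom = json.load(f)

for comp in bom.get("components", []):
    # CycloneDX stores hashes as [{"alg": "SHA-256", "content": "..."}].
    sha256 = next(
        (h["content"] for h in comp.get("hashes", []) if h.get("alg") == "SHA-256"),
        "n/a",
    )
    licenses = [l.get("license", {}).get("id", "?") for l in comp.get("licenses", [])]
    print(f"{comp.get('name')}  sha256={sha256}  licenses={licenses}")
```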
Scan models directly on Hugging Face without downloading terabytes of weights. We use HTTP Range requests to inspect headers over the wire.
```
aisbom scan hf://google-bert/bert-base-uncased
```
- Speed: Scans in seconds, not minutes.
- Storage: Zero disk usage.
- Security: Verify "SafeTensors" compliance before you even `git clone` (a sketch of the mechanism follows below).
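To make the range-request idea concrete, here is a rough sketch of header-only inspection for a safetensors file (illustrative code, not AIsbom's internals; the URL is just an example). The safetensors format starts with an 8-byte little-endian header length followed by a JSON header, so two tiny Range requests retrieve all the metadata:

```python
# Sketch: read only the safetensors JSON header over HTTP Range requests.
# Format: 8-byte little-endian u64 header length, then that many bytes of
# JSON (tensor shapes/dtypes plus an optional "__metadata__" dict).
import json
import struct

import requests

# Example URL; any safetensors file served with Range support works.
URL = ("https://huggingface.co/google-bert/bert-base-uncased"
       "/resolve/main/model.safetensors")

head = requests.get(URL, headers={"Range": "bytes=0-7"}, timeout=30)
(header_len,) = struct.unpack("<Q", head.content)

body = requests.get(URL, headers={"Range": f"bytes=8-{7 + header_len}"}, timeout=30)
header = json.loads(body.content)

print(header.get("__metadata__", {}))   # license info often lives here
print(list(header)[:5])                 # first few tensor names
```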
For high-security environments, switch from "Blocklisting" (looking for malware) to "Allowlisting" (blocking everything unknown).
```
aisbom scan . --strict
```
- Only permits standard ML modules (`torch`, `numpy`, `collections`, etc.).
- Flags any unknown global import as CRITICAL (see the sketch below).
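Conceptually, the allowlist check boils down to something like this sketch, using the standard library's `pickletools` (AIsbom's real engine is more thorough; the allowlist here is illustrative):

```python
# Conceptual allowlist scan of raw pickle bytes (not AIsbom's actual code).
import pickletools

ALLOWED = {"torch", "numpy", "collections"}  # illustrative allowlist

def strict_scan(payload: bytes) -> list[str]:
    findings = []
    for opcode, arg, _pos in pickletools.genops(payload):
        if opcode.name == "GLOBAL":
            module = arg.split(" ")[0]   # arg looks like "module qualname"
            if module.split(".")[0] not in ALLOWED:
                findings.append(f"CRITICAL: unknown global import {arg!r}")
    return findings
```

A complete scanner also resolves `STACK_GLOBAL` references; a sketch of that appears at the end of this page.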
Generate a GitHub-flavored Markdown report suitable for Pull Request comments.
```
aisbom scan . --format markdown --output report.md
```
Add AIsbom to your GitHub Actions pipeline.
Behavior: The scanner returns exit code 1 if Critical risks are found, automatically blocking the build/merge.
```yaml
name: AI Security Scan
on: [pull_request]
jobs:
  aisbom-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Scan AI Models
        uses: Lab700xOrg/aisbom@v0
        with:
          directory: '.'
```
Don't like reading JSON? You can visualize your security posture using our offline viewer.
- Run the scan to generate `sbom.json`.
- Go to aisbom.io/viewer.html.
- Drag and drop your JSON file.
- Get an instant dashboard of risks, license issues, and compliance stats.
Note: The viewer is client-side only. Your SBOM data never leaves your browser.
AI models are not just text files; they are executable programs and IP assets.
- The Security Risk: PyTorch (`.pt`) files are Zip archives containing Pickle bytecode. A malicious model can execute arbitrary code (RCE) instantly when loaded (see the sketch after this list).
- The Legal Risk: A developer might download a "Non-Commercial" model (CC-BY-NC) and deploy it to production. Since the license is hidden inside the binary header, standard tools miss it.
- The Solution: We look inside. We decompile bytecode and parse internal metadata headers without loading the heavy weights into RAM.
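The security-risk point is easy to demonstrate with the textbook pickle gadget (widely documented; shown here only to explain the threat, and best run in a sandbox):

```python
# Why unpickling is code execution: __reduce__ lets a payload nominate any
# callable to run at load time. Harmless demo command; run sandboxed anyway.
import os
import pickle

class Payload:
    def __reduce__(self):
        return (os.system, ("echo 'this ran during unpickling'",))

blob = pickle.dumps(Payload())
pickle.loads(blob)   # the shell command executes here, before you see weights
```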
Security tools require trust. We do not distribute malicious binaries.
However, AIsbom includes a built-in generator so you can create safe "mock artifacts" to verify the scanner works.
1. Install:
```
pip install aisbom-cli
```
2. Generate Test Artifacts: Run this command to create a mock "Pickle Bomb" and a "Restricted License" model in your current folder.
```
aisbom generate-test-artifacts
```
Result: Files named `mock_malware.pt`, `mock_restricted.safetensors`, and `mock_restricted.gguf` are created.
3. Scan them:
```
aisbom scan .
```
Result: You will see `mock_malware.pt` flagged as CRITICAL, and `mock_restricted.safetensors` and `mock_restricted.gguf` flagged as LEGAL RISK.
AIsbom uses a static analysis engine to disassemble Python Pickle opcodes. It looks for specific `GLOBAL` and `STACK_GLOBAL` instructions that reference dangerous modules:
- `os` / `posix` (system calls)
- `subprocess` (shell execution)
- `builtins.eval` / `exec` (dynamic code execution)
- `socket` (network reverse shells)
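A simplified version of that check, using the standard library's `pickletools` (AIsbom's actual engine is more complete; this sketch is illustrative):

```python
# Sketch: disassemble pickle opcodes and flag dangerous global references.
import pickletools

DANGEROUS = {"os", "posix", "subprocess", "builtins", "socket"}

def find_dangerous_globals(payload: bytes) -> list[str]:
    hits, strings = [], []
    for opcode, arg, _pos in pickletools.genops(payload):
        if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            strings.append(arg)          # STACK_GLOBAL reads these off the stack
        elif opcode.name == "GLOBAL":
            module = arg.split(" ")[0]   # arg looks like "module qualname"
            if module.split(".")[0] in DANGEROUS:
                hits.append(arg)
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
            module, name = strings[-2], strings[-1]
            # Coarse module match; a real engine checks names like eval/exec.
            if module.split(".")[0] in DANGEROUS:
                hits.append(f"{module} {name}")
    return hits
```

Note that `.pt` files are Zip archives, so to try this on the generated `mock_malware.pt` you would first extract the embedded pickle payload (e.g. with `zipfile`) before passing its bytes to the function.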
