Point it at a target. It runs recon, logs into the app, chains vulnerabilities into attack paths, proves every finding with a working PoC, and hands back a report your blue team can act on.
No cloud. No telemetry. Your laptop, your keys, your data.
```console
$ ptai start https://staging.acme.com --auth-flow form_post \
    --auth-url /login --auth-username admin --auth-password-env APP_PASS
[+] engagement eng-e512f47b target=staging.acme.com scope=web
[auth]     ✓ Logged in as admin. Session captured, refresh in 14:32.
[recon]    ✓ 3 open ports, 7 subdomains, Apache/PHP fingerprint.
[web]      ✓ 21 findings behind auth. 3 SQLi, 4 XSS, missing CSP, CSRF gap.
[chain]    ✓ Attack path found in 2 hops:
           reflected XSS + cookie without Secure flag → admin session hijack
[validate] ✓ 3 findings proven with non-destructive PoCs.
[detect]   ✓ Generated Sigma, SPL, KQL rules for the blue team.
[report]   ✓ reports/eng-e512f47b.html · 12 pages · client-ready

Total: 4m 18s. Cost: $0.73 in Claude tokens.
```

That was one command. You were pouring coffee.
```bash
pip install ptai
```

Already pay for Claude Pro or Max? Skip the API key. Wire ptai into Claude Code as an MCP server and your subscription runs the engagement.

Option A: one-line CLI (Claude Code users).

```bash
claude mcp add pentest-ai -- ptai mcp
```

Done. Restart Claude Code and the tools show up.

Option B: interactive wizard (Claude Desktop, Cursor, VS Code Copilot).

```bash
ptai setup --mcp
```

Auto-detects the clients you have installed, writes their config files, and tells you to restart them.
Then, in any of those clients:
> Run an authenticated pentest against staging.acme.com. Login is at /login with username admin and password in $APP_PASS. Summarize the high-severity findings when done.
Claude Code (or Cursor, or Copilot) picks up the tools, runs the engagement through your subscription, and streams results back into your conversation. Zero API spend.
For CI pipelines, scheduled runs, or standalone use without an MCP client:
```bash
export ANTHROPIC_API_KEY=sk-ant-...   # Claude (best results)
# or
export OPENAI_API_KEY=sk-...          # OpenAI
# or, fully local, no cloud
export OLLAMA_HOST=localhost:11434    # Ollama

ptai start https://your-target.com
```

First run installs the tool deps it needs (nmap, nuclei, ffuf, sqlmap, gobuster, and more). No setup afterwards.
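The provider precedence implied above can be sketched as a simple env-var check. This is an illustrative sketch only (the function name and fallback order are assumptions, not ptai's actual resolution logic):

```python
# Hypothetical provider selection: prefer Anthropic, then OpenAI,
# then a local Ollama instance; with no key set, fall back to the
# deterministic tool loop described later in this README.
def pick_provider(env: dict) -> str:
    if env.get("ANTHROPIC_API_KEY"):
        return "anthropic"
    if env.get("OPENAI_API_KEY"):
        return "openai"
    if env.get("OLLAMA_HOST"):
        return "ollama"
    return "none"  # no LLM: deterministic tool loop

print(pick_provider({"OPENAI_API_KEY": "sk-..."}))  # -> openai
```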
| | |
|---|---|
| 🤖 Autonomous | Ten agents cover recon, web, AD, cloud, chaining, PoC, detection, and report. They coordinate on their own. |
| 🔐 It logs in | Most scanners die at the login page. This one holds a session, rotates creds, and every downstream tool inherits the cookie. |
| 🧪 Every finding is proven | A working proof of concept runs against the target. No more triaging 40 maybes from a noisy scanner. |
| 📋 Your methodology, in YAML | Encode your pentest checklist as a playbook. Share it. Fork someone else's. Like Nuclei templates, for methodology. |
| 🔁 Diff mode | `ptai retest <id>` shows what's new, fixed, or still broken. The fix, retest, confirm loop becomes one command. |
| ⚡ CI-native | A GitHub Action, GitLab template, severity gates, SARIF output, and PR comments. Works the day you drop it in. |
| 🧠 LLM red team | Probe your AI features for prompt injection, jailbreaks, and OWASP LLM Top 10. Eighty probes built in. |
| 🔌 Works with Claude, Cursor, Copilot | An MCP server with 35+ tools. Talk to your assistant: "diff last week's engagement against today's." |
| 💾 Runs on your laptop | MIT licensed. No cloud calls. Works offline with Ollama. Your findings stay on your disk. |
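Diff mode boils down to set arithmetic over finding fingerprints. A minimal sketch of the idea, with hypothetical fingerprint strings (not ptai's internal representation):

```python
# Compare two engagements' findings by a stable fingerprint and
# bucket them into new, fixed, and still-broken. Illustrative only.
def diff_findings(previous: set[str], current: set[str]) -> dict[str, set[str]]:
    return {
        "new": current - previous,          # appeared since last run
        "fixed": previous - current,        # no longer reproducible
        "still_broken": previous & current, # present in both runs
    }

d = diff_findings({"sqli:/login", "xss:/search"}, {"xss:/search", "csrf:/account"})
# d["fixed"] == {"sqli:/login"}; d["new"] == {"csrf:/account"}
```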
```
┌───────────────────────────────────────────────────────────────┐
│                      ptai start <target>                      │
└───────────────────────────────────────────────────────────────┘
                               │
           ┌───────────────────┼───────────────────┐
           ▼                   ▼                   ▼
      ┌─────────┐         ┌─────────┐         ┌─────────┐
      │  recon  │ ──────▶ │  auth   │ ──────▶ │   web   │
      └─────────┘         └─────────┘         └─────────┘
                               │
           ┌───────────────────┴───────────────────┐
           ▼                                       ▼
      ┌─────────┐   ┌─────────────────────┐   ┌─────────┐
      │   ad    │   │     Findings DB     │   │  cloud  │
      └─────────┘   │ (sqlite + evidence) │   └─────────┘
           │        │    scope-guarded    │        │
           └──────▶ │     deduplicated    │ ◀─────┘
                    └─────────────────────┘
                               │
              ┌────────────────┼────────────────┐
              ▼                ▼                ▼
          ┌───────┐      ┌──────────┐     ┌──────────┐
          │ chain │      │ validate │     │  detect  │
          └───────┘      └──────────┘     └──────────┘
                               │
                               ▼
                         ┌──────────┐
                         │  report  │   md · html · pdf · SARIF · JUnit
                         └──────────┘
```
Each agent runs with an LLM when you've set a key, or as a deterministic tool loop when you haven't. Either way the phase order is the same.
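The fixed phase order amounts to running each phase once its dependencies have completed. A sketch of that deterministic loop, with a hypothetical dependency map (the names mirror the diagram; this is not ptai's internal scheduler):

```python
# Each phase lists the phases it depends on. The loop repeatedly
# runs any phase whose dependencies are all done, so the order is
# the same on every run regardless of LLM availability.
PHASES = {
    "recon": [],
    "auth": ["recon"],
    "web": ["auth"],
    "ad": ["recon"],
    "cloud": ["recon"],
    "chain": ["web", "ad", "cloud"],
    "validate": ["chain"],
    "detect": ["validate"],
    "report": ["detect"],
}

def run_order(phases: dict[str, list[str]]) -> list[str]:
    done: list[str] = []
    while len(done) < len(phases):
        for name, deps in phases.items():
            if name not in done and all(d in done for d in deps):
                done.append(name)
    return done

order = run_order(PHASES)
assert order.index("chain") > order.index("web")  # deps always run first
```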
AppSec teams. Wire ptai into your CI. Every PR against staging gets an authenticated scan. The build fails on high-severity findings. The fix β retest β confirm loop runs on its own.
Consultants. Scope a week-long engagement, point ptai at the estate, and spend your time on the creative work instead of gluing scanners together and writing the report. The report is already written.
Bug bounty hunters. Run it over breakfast. Come back to a list of validated findings with PoCs ready to paste into HackerOne.
Red teamers. Drop your internal AD methodology into a YAML playbook. Run it against every new engagement. Share it with your team.
Developers shipping AI features. Enable --enable-llm-redteam against your chatbot. Get an OWASP LLM Top 10 report in minutes.
Your methodology as a file. Checked into git. Shared with your team.

```yaml
name: internal-ad-pentest
inputs:
  domain: { required: true, prompt: "AD domain" }
  dc_ip: { required: true, prompt: "DC IP" }
phases:
  - id: recon
    tools: [nmap, masscan]
  - id: ad-enum
    depends_on: [recon]
    condition: "any_finding(type='open_port', port=445)"
    tools: [enum4linux, ldapsearch, bloodhound-python]
  - id: kerberoast
    requires_finding: { type: ad_user_enumerated }
    tools: [impacket-getuserspns]
    llm_decide: true  # let the LLM skip if context says useless
```

```bash
ptai playbook list                 # show installed playbooks
ptai playbook show web-app-quick   # preview before running
ptai playbook run ./my-ad.yaml     # execute
```

Five playbooks ship built-in. A community catalog is coming.
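A condition like `any_finding(type='open_port', port=445)` can be read as a filter over the findings collected so far. A sketch of one plausible evaluator (hypothetical; ptai's real condition engine may differ):

```python
# Return True when at least one recorded finding matches every
# key/value pair in the criteria. The ad-enum phase in the playbook
# above would run only when SMB (port 445) was seen during recon.
def any_finding(findings: list[dict], **criteria) -> bool:
    return any(
        all(f.get(k) == v for k, v in criteria.items())
        for f in findings
    )

findings = [
    {"type": "open_port", "port": 445, "host": "10.0.0.5"},
    {"type": "open_port", "port": 80, "host": "10.0.0.5"},
]
print(any_finding(findings, type="open_port", port=445))  # -> True
```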
```yaml
# .github/workflows/security.yml
name: Security scan
on: [pull_request]
jobs:
  ptai:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install ptai
      - run: |
          ptai start ${{ vars.STAGING_URL }} \
            --ci \
            --fail-on high \
            --sarif pentest.sarif
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
      - uses: github/codeql-action/upload-sarif@v3
        if: always()
        with:
          sarif_file: pentest.sarif
```

Findings post as a PR comment, SARIF uploads to GitHub Code Scanning, and the build fails on gated severity. GitLab and Jenkins templates are in docs/ci-cd.md.
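A severity gate like `--fail-on high` amounts to scanning the SARIF result levels and failing the build when anything at or above the threshold appears. A minimal sketch using SARIF 2.1.0's standard `note`/`warning`/`error` levels (the function and mapping are illustrative, not ptai's code):

```python
# Rank SARIF result levels and fail when any result meets the
# threshold. Real gating would also consult rule metadata and
# security-severity properties; this shows only the core check.
LEVEL_RANK = {"note": 0, "warning": 1, "error": 2}

def should_fail(sarif: dict, threshold: str = "error") -> bool:
    results = sarif.get("runs", [{}])[0].get("results", [])
    return any(
        LEVEL_RANK.get(r.get("level", "warning"), 1) >= LEVEL_RANK[threshold]
        for r in results
    )

log = {"runs": [{"results": [{"ruleId": "sqli", "level": "error"}]}]}
print(should_fail(log))  # -> True
```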
| | ptai | Sn1per | Nuclei | Burp Pro | PentestGPT |
|---|---|---|---|---|---|
| Autonomous phase loop | ✓ | ✓ | ✓ | | |
| Authenticated scanning | ✓ | partial | raw HTTP | ✓ | |
| Exploit chaining | ✓ | partial | | | |
| PoC validation | ✓ | partial | | | |
| Diff and retest | ✓ | | | | |
| CI-native (SARIF + gates) | ✓ | partial | partial | | |
| LLM red team | ✓ | | | | |
| YAML playbooks | ✓ | | templates | | |
| MCP server | ✓ | | | | |
| License | MIT | GPL | MIT | commercial | MIT |
- 10 agents across recon, web, AD, cloud, exploit chaining, PoC validation, detection, reporting, LLM red team, and social engineering
- 200+ tool wrappers with auto-install: nmap, masscan, nuclei, ffuf, sqlmap, gobuster, wapiti, nikto, dalfox, xsstrike, enum4linux, bloodhound-python, impacket's full suite, trufflehog, gitleaks, kube-hunter, trivy, and more
- 35+ MCP tools for LLM-driven engagements
- 3 LLM providers: Anthropic Claude, OpenAI, Ollama
- 6 output formats: Markdown, HTML, PDF, SARIF 2.1.0, JUnit XML, compliance mappings (OWASP, CWE, CVE, CVSS v3.1)
- 500 tests at 81% coverage
- MIT licensed, 100% yours
All ten agents (click to expand)
| Agent | Phase | Does |
|---|---|---|
| `recon` | 1 | Port scan, DNS and subdomain enum, service fingerprinting |
| `web` | 2 | Authenticated OWASP Testing Guide v4 pass |
| `ad` | 3 | AD enum, Kerberoasting, BloodHound pathfinding, delegation abuse |
| `cloud` | 4 | AWS, Azure, GCP IAM, misconfig, K8s RBAC, serverless |
| `exploit_chain` | 5 | Correlates findings into multi-step attack paths |
| `poc_validator` | 6 | Non-destructive proof of concept per finding |
| `detection` | 7 | Sigma, SPL, KQL rules for the blue team |
| `report` | 8 | Markdown, HTML, PDF, SARIF, JUnit, compliance maps |
| `llm_redteam` | opt | OWASP LLM Top 10 probes |
| `social_engineer` | opt | Phishing corpus and pretext generation |
Plus mobile and wireless agents for out-of-band engagements.
ptai is for authorized testing. On startup it loads a scope file. Out-of-scope hosts are refused at tool-invocation time. PoCs are non-destructive by default. Rate limits kick in automatically in stealth mode.
You are responsible for having written authorization before pointing this at anything you don't own. Don't be that person.
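The scope guard described above can be pictured as a check run before every tool invocation: the target's host must match the scope file's allowlist. A hypothetical sketch with made-up hosts and ranges (not ptai's implementation):

```python
import ipaddress
from urllib.parse import urlparse

# Scope as an allowlist of hostnames plus CIDR ranges. Anything
# that matches neither is refused before a tool ever runs.
ALLOWED_HOSTS = {"staging.acme.com"}
ALLOWED_NETS = [ipaddress.ip_network("10.0.0.0/24")]

def in_scope(target: str) -> bool:
    host = urlparse(target).hostname or target
    if host in ALLOWED_HOSTS:
        return True
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        return False  # unknown hostname: refuse by default
    return any(ip in net for net in ALLOWED_NETS)

print(in_scope("https://staging.acme.com/login"))  # -> True
print(in_scope("https://prod.acme.com"))           # -> False
```

Refusing by default on anything unrecognized is the important design choice: out-of-scope is the failure mode you never want to reach.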
| Repo | What |
|---|---|
| pentest-ai | The CLI and MCP server (you are here) |
| pentest-ai-agents | Claude Code subagent definitions for the same methodology |
Running this on a team and need more? The website has the team dashboard and managed-assessment options.
The OSS tool stays OSS. Free forever.
PRs welcome. Before you submit:
```bash
ruff check . && mypy . && pytest -q
```

See CONTRIBUTING.md for the full flow.
MIT. Do whatever you want with it.
If ptai saved you a Sunday, star the repo. It's the only payment I ask for.
