Automated Windows forensic triage, powered by AI.
AIFT turns hours of manual artifact analysis into minutes. Upload a disk image, select what to parse, and get an AI-generated forensic report - all from your browser, all running locally on your machine.
Built for incident responders who need fast answers, and simple enough for non-forensic team members to operate.
This project is under active development. Contributions are welcome. If you run into any bugs, let me know!
Upload Evidence → Select Artifacts → Parse → AI Analysis → HTML Report
- Run the app - a local web interface opens in your browser.
- Upload evidence - drag-and-drop an E01, VMDK, VHD, raw image, or archive, or point to a local path for large images.
- Pick artifacts - choose from 25+ Windows forensic artifacts, which will be parsed by Dissect.
- Get results - AI analyzes each artifact for indicators of compromise, correlates findings across artifacts, and generates a self-contained HTML report with evidence hashes and full audit trail.
No Elasticsearch. No Docker. No database. One Python script, one command.
A publicly available test image (Compromised Windows Server 2022 Simulation by Benjamin Donnachie, NIST CFReDS) was used to compare AI providers. The analysis prompt included one real IOC (PsExec) and one IOC not present in the image (redpetya.exe) to test each model's ability to confirm true findings and avoid false positives.
| Model | Cost | Runtime | Quality | Report |
|---|---|---|---|---|
| Kimi | $0.20 | ~5 min | ⭐⭐⭐ | View report |
| OpenAI GPT | $0.94 | ~8 min | ⭐⭐⭐⭐ | View report |
| Claude Opus 4.6 | $3.01 | ~20 min | ⭐⭐⭐⭐⭐ | View report |
| Local: qwen3:8b (RTX 2070, 8 GB VRAM, 32k context window) | $0 | ~2.5 h | ⭐ | View report |
| Local: gpt-oss 120b (DGX Spark, 128 GB (V)RAM, 128k context window) | $0 | ~20 min | ⭐⭐⭐ | View report |
```bash
git clone https://github.com/<your-repo>/aift.git
cd aift
pip install -r requirements.txt
```

Python 3.10-3.13 is required. All dependencies are pure Python - no C libraries, no system packages.
Python 3.14+ is currently unsupported due to upstream dissect.target compatibility.
```bash
python aift.py
```

The app starts and opens your browser to http://localhost:5000. On first run, a default `config.yaml` is created automatically.
Click the gear icon (⚙) in the top-right corner of the UI. Select your AI provider and enter the required credentials:
- For Claude or OpenAI: paste your API key and click Save.
- For Kimi: paste your Moonshot API key and click Save.
- For a local model: enter your server URL (e.g., `http://localhost:11434/v1`) and model name.
Click Test Connection to verify everything works. That's it - you're ready to go.
- Upload evidence by dragging it into the upload area (E01, VMDK, VHD, raw images, ZIP, 7z, tar), or switch to Path Mode and enter the file path for large images or directories.
- AIFT opens the image or Triage Package.
- Select artifacts manually or click Recommended. You can save your selection as a profile and reload it in future cases.
- Click Parse. Progress is shown in real time.
- Enter your investigation context (e.g., "Suspected unauthorized access between Jan 1-15, 2026. Look for new accounts and remote access tools. IOC identified: abc.exe").
- Click Analyze. Per-artifact findings stream in as the AI completes each one, followed by a cross-artifact summary.
- Download the HTML report and/or the raw CSV data.
- Chat with the AI about the results - ask follow-up questions, request correlations, or drill into specific artifacts without re-running the analysis.
After analysis completes, click Show Chat on the Results page to ask follow-up questions, request cross-artifact correlations, or drill into specific CSV data - the AI references its own prior analysis and automatically retrieves matching rows when needed.
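The row-retrieval step can be sketched as a keyword scan over the parsed CSVs. This is an illustrative sketch, not AIFT's internal API; the function name and the CSV columns shown are hypothetical:

```python
import csv
import io

def find_matching_rows(csv_text, query, limit=20):
    """Return parsed-artifact rows where any field contains the query term.

    Case-insensitive substring match - a sketch of the kind of lookup
    a chat feature performs when pulling supporting rows from a CSV.
    """
    query = query.lower()
    reader = csv.DictReader(io.StringIO(csv_text))
    hits = []
    for row in reader:
        if any(query in (value or "").lower() for value in row.values()):
            hits.append(row)
            if len(hits) >= limit:
                break
    return hits

# Hypothetical prefetch CSV, not AIFT's real schema.
prefetch_csv = (
    "filename,run_count,last_run\n"
    "PSEXEC.EXE-AB12CD34.pf,3,2026-01-09T14:02:11\n"
    "NOTEPAD.EXE-11223344.pf,8,2026-01-10T09:15:00\n"
)
matches = find_matching_rows(prefetch_csv, "psexec")
```

A query that matches nothing simply returns an empty list, so the chat can report "no supporting rows found" instead of guessing.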
AIFT supports four AI backends and can run fully offline with a local model. All configuration is done through the in-app settings page.
| Provider | What You Need | Notes |
|---|---|---|
| Anthropic Claude | API key from console.anthropic.com | Recommended for analysis quality |
| OpenAI / GPT | API key from platform.openai.com | GPT-4o or later |
| Kimi | API key from platform.moonshot.ai | Moonshot AI's Kimi K2 - OpenAI-compatible |
| Local model | Any OpenAI-compatible server | Ollama, LM Studio, vLLM, text-generation-webui |
```bash
ollama pull llama3.1:70b
ollama serve
```

In AIFT settings: select Local, set the URL to `http://localhost:11434/v1`, and the model to `llama3.1:70b`.
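Any OpenAI-compatible server accepts the same request shape, which is why Ollama, LM Studio, and vLLM are interchangeable here. A minimal sketch of the JSON body sent to `<server>/v1/chat/completions` (the helper function is illustrative, not part of AIFT):

```python
import json

def build_chat_request(model, system_prompt, user_prompt, max_tokens=4096):
    """Build the JSON body for a POST to an OpenAI-compatible
    /v1/chat/completions endpoint (Ollama, LM Studio, vLLM, ...)."""
    return {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }

body = build_chat_request(
    "llama3.1:70b",
    "You are a forensic analyst.",
    "Summarize the prefetch findings.",
)
payload = json.dumps(body)
```

Because the wire format is identical across providers, switching backends is just a URL and model-name change in settings.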
Important: set Analysis Max Tokens to match your model's context window (Settings > Advanced). For example, qwen3:8b with 32K context → set to 32000. Cloud models (Claude, OpenAI, Kimi) default to 128K and typically don't need adjustment.
When an artifact's data exceeds the context budget, AIFT automatically chunks the CSV across multiple AI calls so every row is analyzed. Chunk findings are then merged hierarchically - grouped into batches that fit the context window, merged by the AI, and repeated until a single result remains. This ensures no data is lost regardless of model size. The maximum number of merge rounds before fallback can be configured via Max Merge Rounds in advanced settings (default: 5).
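The chunk-and-merge strategy can be sketched as follows. This is a minimal illustration of the hierarchical idea, with a toy merge function standing in for the AI merge call; names and batch logic are assumptions, not AIFT's actual implementation:

```python
def chunk_rows(rows, rows_per_chunk):
    """Split CSV rows into context-sized batches (token estimation omitted)."""
    return [rows[i:i + rows_per_chunk] for i in range(0, len(rows), rows_per_chunk)]

def merge_hierarchically(findings, batch_size, merge_fn, max_rounds=5):
    """Repeatedly merge per-chunk findings in batches until one remains.

    merge_fn stands in for the AI merge call; max_rounds mirrors the
    Max Merge Rounds setting (default 5).
    """
    rounds = 0
    while len(findings) > 1 and rounds < max_rounds:
        batches = [findings[i:i + batch_size] for i in range(0, len(findings), batch_size)]
        findings = [merge_fn(batch) for batch in batches]
        rounds += 1
    # Fallback: if rounds ran out, force one final merge over the remainder.
    return findings[0] if len(findings) == 1 else merge_fn(findings)

# Toy merge: concatenate findings. With a real model this is an AI call.
result = merge_hierarchically(
    [f"finding-{i}" for i in range(9)], batch_size=3, merge_fn=" + ".join
)
```

With nine chunk findings and a batch size of three, round one produces three merged results and round two produces the single final result - no finding is dropped along the way.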
A minimum of 32K tokens is strongly recommended.
API keys can also be set via environment variables instead of the UI:
```bash
export ANTHROPIC_API_KEY="sk-ant-..."
export OPENAI_API_KEY="sk-..."
export KIMI_API_KEY="sk-..."
```

AIFT uses Dissect by Fox-IT (NCC Group) for forensic parsing - pure Python, no external dependencies.
| Category | Artifacts |
|---|---|
| Persistence | Run/RunOnce Keys, Scheduled Tasks, Services, WMI Persistence |
| Execution | Shimcache, Amcache, Prefetch, BAM/DAM, UserAssist, MUIcache |
| Event Logs | Windows Event Logs (all channels), Defender Logs |
| File System | NTFS MFT, USN Journal, Recycle Bin |
| User Activity | Browser History, Browser Downloads, PowerShell History, Activities Cache |
| Network | SRUM Network Data, SRUM Application Usage |
| Registry | Shellbags, USB Device History |
| Security | SAM User Accounts, Defender Quarantine |
Only artifacts present in the image are shown. Unavailable artifacts are automatically grayed out.
AIFT uses Dissect for evidence loading, which supports a wide range of forensic image and disk formats.
| Category | Formats | Notes |
|---|---|---|
| EnCase (EWF) | .E01, .Ex01, .S01, .L01 | Split segments (.E02, .E03, ...) are auto-discovered in the same directory |
| Raw / DD | .dd, .img, .raw, .bin, .iso | Bit-for-bit disk images |
| Split raw | .000, .001, ... | Segmented raw images - pass the first segment |
| VMware | .vmdk, .vmx, .vmwarevm | Virtual disk and VM config (auto-loads associated disks) |
| Hyper-V | .vhd, .vhdx, .vmcx | Legacy and modern Hyper-V formats |
| VirtualBox | .vdi, .vbox | VirtualBox disk and VM config |
| QEMU | .qcow2, .utm | QEMU Copy-On-Write and UTM bundles |
| Parallels | .hdd, .hds, .pvm, .pvs | Parallels Desktop images |
| OVA / OVF | .ova, .ovf | Open Virtualization Format |
| XenServer | .xva, .vma | Xen and Proxmox exports |
| Backup | .vbk | Veeam Backup files |
| Dissect native | .asdf, .asif | Dissect acquire output |
| FTK / AccessData | .ad1 | Logical images |
| Archives | .zip, .7z, .tar, .tar.gz | Extracted and scanned for evidence files inside |
Evidence can also be provided as a directory path (e.g., KAPE, Velociraptor, or UAC triage output).
For images over 2 GB, use Path Mode instead of uploading - enter the local file path and AIFT reads it directly.
Features under active development:
- Multi-Image Support: Analyze multiple evidence sources in a single case (e.g., workstation + server + domain controller). Includes cross-system correlation to identify lateral movement and shared IOCs.
- Linux Support: Full analysis of Linux disk images using Dissect. Covers bash/zsh/fish history, wtmp/btmp, syslog, journald, cron jobs, systemd services, SSH keys, package history, and user accounts.
- Mobile Support: iOS and Android device analysis using iLEAPP and ALEAPP. Covers call logs, SMS, browser history, installed apps, location data, and more.
AIFT is built with forensic defensibility in mind:
- Evidence is read-only. Disk images are never modified. Dissect opens everything in read-only mode.
- SHA-256 + MD5 hashing on intake and before report generation. Hash match is verified and shown in the report.
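Computing both digests in a single read pass keeps intake fast even for multi-gigabyte images. A minimal sketch (the function name is illustrative, not AIFT's internal API):

```python
import hashlib

def hash_evidence(path, block_size=1 << 20):
    """Compute SHA-256 and MD5 of a file in one streaming pass.

    Reading in 1 MiB blocks keeps memory flat regardless of image size.
    """
    sha256, md5 = hashlib.sha256(), hashlib.md5()
    with open(path, "rb") as fh:
        while block := fh.read(block_size):
            sha256.update(block)
            md5.update(block)
    return sha256.hexdigest(), md5.hexdigest()
```

Hashing once on intake and again before report generation lets the report assert that the evidence was untouched in between.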
- Complete audit trail. Every action (upload, parse, analyze, report) is logged with UTC timestamps to a per-case `audit.jsonl` file.
- AI guardrails. The AI is instructed to cite specific records, state uncertainty explicitly, and never fabricate evidence. Findings include confidence ratings (HIGH / MEDIUM / LOW).
- Prompt audit trail. Every prompt sent to the AI (system prompt + user prompt) is saved to the case's `prompts/` directory. This allows full review of exactly what the AI was asked, regardless of provider.
- Disclaimer in every report. AI-assisted findings must be verified by a qualified examiner before use in legal or formal proceedings.
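An append-only JSONL audit trail is easy to reason about: one JSON object per line, each with a UTC timestamp, never rewritten. A sketch of the pattern (field names are illustrative, not AIFT's exact schema):

```python
import json
from datetime import datetime, timezone

def audit_log(path, action, **details):
    """Append one audit event as a JSON line with a UTC timestamp.

    Opening in append mode preserves ordering and never rewrites
    earlier events, which is what makes the trail defensible.
    """
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,
        **details,
    }
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(event) + "\n")
```

Because each line is a standalone JSON object, the trail can be tailed, grepped, or loaded into any log tool without special parsing.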
AIFT generates a self-contained HTML report - all CSS inlined, no external dependencies. Open it in any browser, print it, or archive it. The report includes:
- Evidence metadata and hash verification
- Executive summary with confidence assessment
- Per-artifact findings with cited evidence
- Investigation gaps and recommended next steps
- Complete audit trail
Parsed artifact data is also available as a downloadable CSV bundle for further analysis.
- Python 3.10-3.13 (3.14+ is currently unsupported due to upstream `dissect.target` compatibility)
- 8 GB RAM minimum (for parsing large artifacts)
- Disk space: ~2× the evidence file size (for parsed CSV output)
- No C library dependencies - Dissect is pure Python
```
aift/
├── aift.py                  # Entry point - run this
├── config.yaml              # Created on first run
├── requirements.txt         # Python dependencies
├── app/                     # Backend (Flask routes, parsing, analysis, reporting)
├── config/                  # Application configuration files
├── images/                  # Branding assets
├── profile/                 # Artifact selection presets
├── prompts/                 # AI prompt templates (customizable)
│   └── artifact_instructions/  # Per-artifact analysis guidance
├── static/                  # Frontend assets (CSS + vanilla JS)
├── templates/               # Jinja2 templates (UI + report)
├── tests/                   # Unit tests
└── cases/                   # Case data (created at runtime)
    └── <case-id>/
        ├── evidence/        # Uploaded evidence files
        ├── parsed/          # Parsed artifact CSVs
        ├── prompts/         # Saved AI prompts (auto-generated)
        ├── reports/         # Generated HTML reports
        ├── chat_history.jsonl  # Chat conversation log (per case)
        └── audit.jsonl      # Append-only audit trail
```
Prompt templates in `prompts/` are plain markdown files. Edit them to tune AI analysis behavior without touching code. The `chunk_merge.md` template controls how findings from chunked analysis (used for small-context local models) are merged into a single result.
The `config/artifact_ai_columns.yaml` file controls which columns from each parsed artifact are sent to the AI - edit it to include or exclude fields per artifact to fine-tune what the AI sees.
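The column-filtering idea can be sketched as a simple per-artifact allowlist. The dictionary below stands in for the parsed YAML config; its contents are hypothetical, not the file's real entries:

```python
# Assume this dict was loaded from config/artifact_ai_columns.yaml
# (hypothetical content - the real file ships with AIFT).
ai_columns = {
    "prefetch": ["filename", "run_count", "last_run"],
}

def filter_for_ai(artifact, rows, config):
    """Keep only the configured columns of each parsed row before it is
    sent to the AI; artifacts without an entry pass through unchanged."""
    allowed = config.get(artifact)
    if allowed is None:
        return rows
    return [{col: row[col] for col in allowed if col in row} for row in rows]

rows = [{"filename": "PSEXEC.EXE-AB12CD34.pf", "run_count": "3",
         "last_run": "2026-01-09T14:02:11", "hash": "1f2e3d"}]
trimmed = filter_for_ai("prefetch", rows, ai_columns)
```

Dropping noisy columns this way both reduces token usage and keeps sensitive or irrelevant fields out of the prompt.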
During analysis, every prompt sent to the AI is saved to the case's prompts/ directory for audit and reproducibility.
AIFT output is AI-assisted. All findings must be independently verified by a qualified forensic examiner before use in any legal, regulatory, or formal investigative proceeding. The AI analyzes only the data provided and may not capture all relevant artifacts or context.
When using a cloud-based AI provider, parsed artifact data is sent to external servers for analysis. Be mindful of the sensitivity of the evidence - if the data is subject to privacy regulations, legal restrictions, or confidentiality requirements, consider using a local model instead.
AIFT is released as open source by Flip Forensics and made available at https://github.com/FlipForensics/AIFT.
License terms: AGPL-3.0 (https://www.gnu.org/licenses/agpl-3.0.html).
Contact: [email protected]

