TrustReply is an open-source platform for automating security and customer questionnaire responses. It helps teams answer security reviews, compliance assessments, privacy questionnaires, and due-diligence forms by reusing approved answers from a knowledge base, matching them semantically, filling supported document formats, and routing missing answers into a human-review workflow.
Teams answering questionnaires often repeat the same work across many files and many slightly different document layouts. TrustReply is built to reduce that repetition without pretending every answer should be fully autogenerated.
The product approach is:
- reuse trusted answers from a curated knowledge base
- fill documents automatically where confidence is high
- flag missing or uncertain questions for review
- detect contradictions and duplicates in your knowledge base
- route flagged questions to the right subject-matter experts
- learn from resolved questions over time
- Upload `.docx`, `.pdf`, `.xlsx`, and `.csv` questionnaires
- Parse tables, row-block layouts, paragraphs, Excel worksheets, CSV grids, and several document profile variants
- Match questions against a Q&A knowledge base using sentence-transformer embeddings
- Optionally run a two-stage agent workflow (Research Agent + Fill Agent) for context-aware answers
- Write answers back into supported documents while preserving formatting (including Excel dropdowns, merged cells, and styles)
- Track answer sources with full traceability back to the originating KB entry
- Detect contradictions between knowledge-base entries automatically
- Route flagged questions to subject-matter experts by category
- Group repeated flagged questions so teams answer them once instead of many times
- Export unresolved flagged questions as a simple `category,question,answer` CSV
- Import completed CSVs back into the knowledge base
- Sync flagged questions against newly imported knowledge-base answers
- Run bulk uploads with batch summaries and downloadable batch ZIP outputs
- Troubleshoot difficult files by comparing parser profiles before retrying
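The semantic matching above can be sketched in a few lines. This is a minimal illustration only: it uses a toy bag-of-words vector in place of the dense sentence-transformer embeddings TrustReply actually uses, and the `match_question` helper and its threshold are hypothetical, not part of the real API.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; the real system uses dense
    # sentence-transformer (all-MiniLM-L6-v2) vectors instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Standard cosine similarity over sparse token counts.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def match_question(question, kb, threshold=0.3):
    # Return the best KB entry above the threshold, else None
    # (which would flag the question for human review).
    q = embed(question)
    best = max(kb, key=lambda e: cosine(q, embed(e["question"])))
    score = cosine(q, embed(best["question"]))
    return (best if score >= threshold else None, score)

kb = [
    {"question": "do you encrypt data at rest", "answer": "Yes, AES-256."},
    {"question": "do you have a disaster recovery plan", "answer": "Yes, tested annually."},
]
entry, score = match_question("is data encrypted at rest", kb)
```

The point of the embedding step is that paraphrased prompts ("is data encrypted at rest" vs. "do you encrypt data at rest") still land on the same KB entry even when exact wording differs.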
TrustReply is useful anywhere a team repeatedly answers structured questionnaires:
- Security reviews and SIG-style questionnaires
- Customer security assessments
- Privacy and data-handling questionnaires
- Business continuity and disaster recovery assessments
- Compliance and due-diligence forms
- Internal operations and governance assessments
More examples are documented in docs/USE_CASES.md.
- Add approved Q&A pairs to the Knowledge Base.
- Upload one file or a batch of questionnaire documents (.docx, .pdf, .xlsx, or .csv).
- TrustReply parses the document and matches questions to known answers.
- Optional agent mode can research context, fill unresolved answers, and flag uncertain prompts.
- Review auto-filled answers in the inline review queue with confidence scores and source traceability.
- Edit or approve answers, then finalize and download the completed document.
- Unresolved questions are grouped in the Flagged Questions queue, with optional SME routing.
- Export missing questions as CSV, fill in answers, and re-import them.
- Sync flagged questions with the updated knowledge base.
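The export/re-import loop in the last steps works on a plain `category,question,answer` CSV. A minimal sketch of that round-trip, assuming the stated CSV shape (the helper names here are illustrative, not TrustReply's internal API):

```python
import csv
import io

def export_flagged(flagged):
    # Serialize unresolved questions into the category,question,answer
    # CSV shape, leaving the answer column blank for SMEs to fill in.
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["category", "question", "answer"])
    for item in flagged:
        writer.writerow([item["category"], item["question"], ""])
    return buf.getvalue()

def import_answers(csv_text):
    # Keep only rows where an answer was actually filled in.
    rows = csv.DictReader(io.StringIO(csv_text))
    return [r for r in rows if r["answer"].strip()]

exported = export_flagged([{"category": "Security", "question": "Is MFA enforced?"}])
# Simulate an SME filling in the blank answer cell before re-import.
completed = exported.replace("Is MFA enforced?,", "Is MFA enforced?,Yes for all staff")
answers = import_answers(completed)
```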
- Knowledge Base Management: CRUD, categories, search, CSV/JSON import/export, duplicate detection, contradiction detection
- Semantic Matching: embedding-based question matching for paraphrased prompts
- Source Traceability: every answer links back to the KB entry it was matched from, with category, similarity score, and source Q&A visible in review
- Contradiction Detection: automatically flags KB entries with conflicting answers on the same topic
- SME Routing: route flagged questions to category-specific subject-matter experts via configurable email mappings
- Confidence Score Visibility: per-answer confidence badges (green/yellow/red) so reviewers focus on low-confidence answers
- Answer Review Queue: inline review table after processing -- approve, edit, or override any answer before downloading
- Finalize & Download: regenerate the output document with edited answers after review
- Excel Support: full .xlsx round-trip -- parse questions from Excel workbooks and write answers back preserving dropdowns, merged cells, data validation, and cell styles
- Agent Mode: optional agent mode with contextual research/fill workflows
- Agent Instruction Presets: built-in and custom instruction presets for common answering styles
- Parser Profiles: multiple layout strategies for document, Excel, and CSV questionnaire structures
- Troubleshooting: compare parser profiles plus optional advanced diagnostics and trace output
- Human-in-the-loop Review: grouped flagged questions, resolution flow, and KB sync
- Batch Processing: upload up to 50 files in one batch, track per-file results, and download ZIP outputs
- Confirmation Dialogs: styled confirmation modals for destructive actions (delete, bulk dismiss)
- Review Placeholders: unresolved items are visibly marked in outputs instead of silently left blank
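Contradiction detection, conceptually, is pairwise: find KB entries whose questions cover the same topic but whose answers disagree. The sketch below uses a crude keyword-overlap predicate as a stand-in for the semantic comparison the real feature performs; both helpers are illustrative, not TrustReply's actual functions.

```python
def share_keyword(q1, q2):
    # Crude topic test: share any word longer than 4 characters.
    # The real check compares entries semantically, not lexically.
    words = lambda q: {w.strip("?.,:;") for w in q.lower().split() if len(w) > 4}
    return bool(words(q1) & words(q2))

def find_contradictions(kb, same_topic):
    # Pair entries whose questions cover the same topic
    # but whose answers differ.
    hits = []
    for i, a in enumerate(kb):
        for b in kb[i + 1:]:
            if same_topic(a["question"], b["question"]) and a["answer"] != b["answer"]:
                hits.append((a, b))
    return hits

kb = [
    {"question": "Do you encrypt backups?", "answer": "Yes"},
    {"question": "Are backups encrypted?", "answer": "No"},
    {"question": "Is MFA required?", "answer": "Yes"},
]
conflicts = find_contradictions(kb, share_keyword)
```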
| Layer | Technology |
|---|---|
| Frontend | Next.js / React |
| Backend | FastAPI |
| Database | PostgreSQL (Supabase) or SQLite |
| Document Parsing | python-docx, pdfplumber, openpyxl |
| Semantic Matching | sentence-transformers (all-MiniLM-L6-v2) |
| Styling | Vanilla CSS |
```
backend/          FastAPI app, parser/matcher/generator services, tests, scripts
frontend/         Next.js app
test-data/        34 sample questionnaires across xlsx, docx, pdf, and csv formats
docs/             product and contributor documentation
start-backend.sh  one-command startup script for local development
```
```shell
cd backend
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
uvicorn app.main:app --reload --reload-dir app --host 127.0.0.1 --port 8001
```

```shell
cd frontend
npm install
npm run dev
```

Open http://localhost:3000.

Note: the frontend `.env.local` is configured for http://127.0.0.1:8001, so run the backend on port 8001 for local development.

```shell
./start-backend.sh
```

This kills stale processes, starts both backend and frontend via nohup, and verifies health.
```shell
docker compose up --build
```

This starts:

- frontend on http://localhost:3000
- backend on http://localhost:8000
Agent mode is optional and disabled by default.
Set these backend environment variables to enable it:
```shell
QF_AGENT_ENABLED=true
QF_AGENT_PROVIDER=openai
QF_AGENT_API_BASE=https://api.openai.com/v1
QF_AGENT_API_KEY=your_api_key
QF_AGENT_MODEL=gpt-4.1-nano
QF_AGENT_DEFAULT_MODE=agent
```

Optional tuning:

```shell
QF_AGENT_TIMEOUT_SECONDS=45
QF_AGENT_MAX_QUESTIONS_PER_CALL=20
QF_AGENT_MAX_CONTEXT_CHARS=6000
```

Notes:
- Two providers are supported and configurable in Settings.
- Parser profiles are still used to anchor exact question/answer placement in output documents.
- Configure provider/base URL/model/key in the Settings page (keys are persisted in backend env settings).
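The shape of the two-stage workflow (Research Agent, then Fill Agent) can be sketched as below. This is a hypothetical outline, not TrustReply's internal code: the stub `research` and `fill` callables stand in for the real LLM-backed agents, and the flagging rule is illustrative.

```python
def run_agent_workflow(questions, kb_context, research_fn, fill_fn):
    # Stage 1: research gathers context for each question.
    # Stage 2: fill drafts an answer; low-confidence fills are flagged
    # for human review instead of being written into the document.
    results = []
    for q in questions:
        context = research_fn(q, kb_context)
        answer, confident = fill_fn(q, context)
        results.append({
            "question": q,
            "answer": answer if confident else None,
            "flagged": not confident,
        })
    return results

# Stub agents standing in for the real LLM-backed Research/Fill agents.
research = lambda q, kb: f"context for: {q}"
fill = lambda q, ctx: ("Yes, documented.", "unknown" not in q)
results = run_agent_workflow(["Do you log access?", "unknown topic"], {}, research, fill)
```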
Docker quick-start without saving your key in source:

```shell
export QF_AGENT_API_KEY=your_api_key
export QF_AGENT_ENABLED=true
docker compose up --build
```

SME routing is optional and disabled by default. Enable it via Settings or environment variables:

```shell
QF_SME_ROUTING_ENABLED=true
QF_CATEGORY_SME_MAP='{"Security": "[email protected]", "Privacy": "[email protected]"}'
```

When enabled, flagged questions are automatically assigned to the configured SME email for their category upon resolution.
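Since `QF_CATEGORY_SME_MAP` is a JSON object mapping category to email, the routing lookup amounts to a parse plus a dictionary get. A minimal sketch (the helper names are illustrative, not the actual backend functions):

```python
import json

def load_sme_map(env):
    # QF_CATEGORY_SME_MAP holds a JSON object: category -> SME email.
    return json.loads(env.get("QF_CATEGORY_SME_MAP", "{}"))

def route_flagged(category, sme_map, fallback=None):
    # Assign a flagged question to the SME configured for its category,
    # falling back (e.g. to unassigned) when no mapping exists.
    return sme_map.get(category, fallback)

env = {"QF_CATEGORY_SME_MAP": '{"Security": "[email protected]", "Privacy": "[email protected]"}'}
sme_map = load_sme_map(env)
assignee = route_flagged("Security", sme_map)
```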
The repository includes 34 realistic GRC questionnaires under `test-data/`, including:
- 11 Excel (.xlsx) files with dropdowns, merged cells, and color themes
- 11 Word (.docx) files with styled tables and section headings
- 10 PDF files with numbered questions and section dividers
- 1 CSV file
- A seed script (`backend/scripts/seed_demo_data.py`) to populate the knowledge base with 43 Q&A pairs across 10 GRC categories
| Format | Upload | Output |
|---|---|---|
| `.docx` | Full table/paragraph/row-block parsing | Answers written back preserving styles |
| `.pdf` | Question extraction via pdfplumber | Answers written to new PDF |
| `.xlsx` | Auto-detect question/answer columns across sheets | Answers written back preserving dropdowns, merged cells, validation, and styles |
| `.csv` | Tabular questionnaire parsing | Processed and matched |
- PDF write-back is more limited than DOCX and XLSX round-trip
- Scanned PDFs still need OCR support for best results
- Parser coverage is good for many common layouts, but not every possible enterprise form
| Variable | Description | Default |
|---|---|---|
| `QF_DATABASE_URL` | Database connection string | SQLite local |
| `QF_SUPABASE_URL` | Supabase project URL (optional) | — |
| `QF_SUPABASE_ANON_KEY` | Supabase anonymous key (optional) | — |
| `QF_SUPABASE_JWT_SECRET` | JWT secret for auth (optional) | — |
| `QF_AGENT_ENABLED` | Enable agent mode | `false` |
| `QF_AGENT_PROVIDER` | Provider name | `openai` |
| `QF_AGENT_API_KEY` | Provider API key | — |
| `QF_AGENT_MODEL` | Model identifier | `gpt-4.1-nano` |
| `QF_SME_ROUTING_ENABLED` | Enable SME routing | `false` |
| `QF_CATEGORY_SME_MAP` | JSON map of category to SME email | `{}` |
TrustReply is released under the MIT License. That means other developers can use, modify, and distribute the software under the terms of that license.
Maintainers still control what gets merged into the official upstream project. If you want to contribute improvements back to the main repository, please follow CONTRIBUTING.md.
Issues, fixes, parser improvements, UX improvements, and new document-layout support are all welcome.
- Contribution guide: CONTRIBUTING.md
- Code of conduct: CODE_OF_CONDUCT.md
- License: LICENSE