Privacy-first document Q&A with local RAG
SafeQueryAI is a privacy-first architecture for document question-answering. It lets users upload PDF and CSV files, ask natural-language questions, and receive grounded answers from a local LLM runtime (Ollama by default).
SafeQueryAI addresses a common business need: extracting actionable information from private documents without exposing that data to external services.
The product enforces session-based processing: uploaded files and any derived data are scoped to a single session and are discarded when that session expires (60 minutes by default).
See Architecture and Security & Privacy for full detail.

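The session-expiry rule above can be sketched as a pure function. This is an illustrative sketch only, not project code; the function name `session_expired` is hypothetical, and the only grounded value is the 60-minute default timeout from the table below.

```python
from datetime import datetime, timedelta

# Default session timeout documented in the quick-reference table.
SESSION_TIMEOUT = timedelta(minutes=60)

def session_expired(last_activity: datetime, now: datetime) -> bool:
    """Return True once a session has been idle longer than the timeout."""
    return now - last_activity > SESSION_TIMEOUT

start = datetime(2026, 3, 1, 12, 0)
print(session_expired(start, start + timedelta(minutes=30)))  # False
print(session_expired(start, start + timedelta(minutes=61)))  # True
```

A strict `>` comparison means a session idle for exactly 60 minutes is still considered live; whether the boundary is inclusive is an implementation choice.
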
| Item | Default Value |
|---|---|
| Frontend URL | http://localhost:5173 |
| Backend API | http://localhost:5000/api |
| Swagger UI (development) | http://localhost:5000/swagger |
| Ollama URL | http://localhost:11434 |
| Supported file types | .pdf, .csv |
| Session timeout | 60 minutes |
| Max file size | 20 MB (25 MB absolute request ceiling) |
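The supported file types and size limit above can be enforced with a simple pre-upload check. This is a hedged sketch, not the project's actual validation code; the helper name `is_acceptable_upload` is hypothetical, while the `.pdf`/`.csv` allow-list and the 20 MB limit come from the table.

```python
from pathlib import Path

# Values taken from the quick-reference table above.
ALLOWED_EXTENSIONS = {".pdf", ".csv"}
MAX_FILE_BYTES = 20 * 1024 * 1024  # 20 MB default limit

def is_acceptable_upload(filename: str, size_bytes: int) -> bool:
    """Pre-check mirroring the documented upload limits."""
    suffix = Path(filename).suffix.lower()
    return suffix in ALLOWED_EXTENSIONS and size_bytes <= MAX_FILE_BYTES

print(is_acceptable_upload("report.pdf", 5_000_000))        # True
print(is_acceptable_upload("notes.txt", 1_000))             # False: unsupported type
print(is_acceptable_upload("big.csv", 21 * 1024 * 1024))    # False: over 20 MB
```

Note the table also lists a 25 MB absolute request ceiling; the server may reject requests between 20 MB and 25 MB at a different layer than this per-file check.
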

| Section | Focus |
|---|---|
| Business Overview | Problem statement, users, scope, risks, success criteria |
| Getting Started | Local setup and first run |
| Architecture | System flow, components, constraints |
| Security & Privacy | Data handling model and trust assumptions |
| API Reference | Endpoint contract and examples |
| Frontend Guide | UI structure and frontend integration points |
| Testing | Test strategy, suites, and execution commands |
| Deployment | Current deployment approach and operational notes |
| Roadmap | Planned improvements and delivery priorities |
| FAQ | Common operational and setup questions |
Last updated: March 2026