SafeQueryAI Documentation

Privacy-first document Q&A with local RAG

View the project on GitHub: JYOshiro/SafeQueryAI


SafeQueryAI is a privacy-first architecture for document question-answering. It lets users upload PDF and CSV files, ask natural-language questions, and receive grounded answers from a local LLM runtime.

Product Summary

SafeQueryAI addresses a common business need: extracting actionable information from private documents without exposing that data to external services.

The product enforces session-based processing: uploaded files and their derived vectors exist only for the lifetime of a user session and are removed on timeout or manual clear.

Business Value

High-Level Architecture

  1. Frontend receives uploads and question input.
  2. Backend stores files in temporary storage per session.
  3. Text extraction and chunking prepare data for retrieval.
  4. Embeddings and retrieval run through the local LLM runtime (Ollama).
  5. Answers are generated using only session content and returned to the user.
  6. Session timeout or manual clear removes temporary files and in-memory vectors.

See Architecture and Security & Privacy for full detail.
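The flow above can be sketched in miniature. This is a hypothetical illustration, not SafeQueryAI's actual code: the `Session` class, `chunk`, and `embed` names are invented for this example, and the bag-of-words "embedding" is a toy stand-in for the real embeddings produced by the Ollama runtime.

```python
# Hypothetical sketch of the per-session retrieval flow (steps 3-6).
# All names are illustrative; the toy word-count "embedding" stands in
# for real Ollama embeddings.
from collections import Counter
from math import sqrt


def chunk(text: str, size: int = 200) -> list[str]:
    """Step 3: split extracted document text into fixed-size chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]


def embed(text: str) -> Counter:
    """Toy embedding (word counts); the real system calls the local LLM runtime."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


class Session:
    """In-memory, per-session store; nothing persists past the session."""

    def __init__(self) -> None:
        self.chunks: list[str] = []
        self.vectors: list[Counter] = []

    def ingest(self, text: str) -> None:
        """Steps 3-4: chunk the text and index each chunk's vector."""
        for c in chunk(text):
            self.chunks.append(c)
            self.vectors.append(embed(c))

    def retrieve(self, question: str, k: int = 2) -> list[str]:
        """Step 5: return the k chunks most similar to the question."""
        q = embed(question)
        ranked = sorted(zip(self.chunks, self.vectors),
                        key=lambda cv: cosine(q, cv[1]), reverse=True)
        return [c for c, _ in ranked[:k]]

    def clear(self) -> None:
        """Step 6: drop chunks and vectors when the session ends."""
        self.chunks.clear()
        self.vectors.clear()
```

In the real system the retrieved chunks would then be passed, together with the question, to the local LLM for answer generation, so the answer is grounded only in session content.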

Environment Reference

| Item | Default Value |
| --- | --- |
| Frontend URL | http://localhost:5173 |
| Backend API | http://localhost:5000/api |
| Swagger UI (development) | http://localhost:5000/swagger |
| Ollama URL | http://localhost:11434 |
| Supported file types | .pdf, .csv |
| Session timeout | 60 minutes |
| Max file size | 20 MB (25 MB absolute request ceiling) |

Audience Paths

Documentation Map

| Section | Focus |
| --- | --- |
| Business Overview | Problem statement, users, scope, risks, success criteria |
| Getting Started | Local setup and first run |
| Architecture | System flow, components, constraints |
| Security & Privacy | Data handling model and trust assumptions |
| API Reference | Endpoint contract and examples |
| Frontend Guide | UI structure and frontend integration points |
| Testing | Test strategy, suites, and execution commands |
| Deployment | Current deployment approach and operational notes |
| Roadmap | Planned improvements and delivery priorities |
| FAQ | Common operational and setup questions |

Last updated: March 2026